Tag Archives: open data

Tweeting tides updated

Almost a year ago I implemented a system that took tidal data openly published by Land Information New Zealand for a number of ports around the country and republished it as real-time status updates on Twitter.  Each time there is a high or low tide, a tweet is sent that says what the time is, whether it is a high or low tide, how high or low the tide is, and when the next tide will arrive.  Other than adding a few more tide tables, I’ve not had to touch the system since then.

Recently, however, Twitter made changes to their authentication system, which means that the previous approach of simply sending a web request along with a username and password is no longer accepted.  Instead, clients have to make use of OAuth. I won’t go into this in much detail, other than to say it is a great system for web-based or desktop clients, but for server-side scripts that are not used by ‘users’ as such, it is a bit of a pain. Several hours of pain, in fact, including copying tokens from session cookies and so on, but it is working fine now.
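For what it’s worth, the shape of the fix looks roughly like the sketch below. It uses the tweepy library as an example rather than my actual script, and the four token values are placeholders – the point is simply that all four are generated once up front and stored alongside the script, so no interactive login is needed.

```python
import tweepy  # example only; any OAuth-capable Twitter library would do

# Placeholder values -- all four come from Twitter's developer pages.
CONSUMER_KEY = "xxxx"
CONSUMER_SECRET = "xxxx"
ACCESS_TOKEN = "xxxx"
ACCESS_TOKEN_SECRET = "xxxx"

# Sign requests with OAuth instead of a plain username and password.
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)

# Post a status update, just as the old basic-auth request used to.
api.update_status(status="High tide at Auckland (3.1m). Next low tide at 14:42.")
```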

Whilst I was fixing this, I thought I’d make use of one of Twitter’s newer features: the ability to geo-tag tweets.  Since the tide tables relate to specific ports, they are obvious candidates for geo-tagging.  I have the latitude and longitude values for each of the ports, so these are now added to the tweets.  They render differently in different Twitter clients, but this is how TweetDeck looks:
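On the code side, continuing the sketch above, the only change needed for geo-tagging is to pass the port’s coordinates along with the status text – tweepy exposes these as lat and long parameters (the coordinates below are just Wellington’s, rounded):

```python
# Geo-tag the tweet with the port's coordinates so clients can place it on a map.
api.update_status(
    status="Low tide at Wellington (0.4m). Next high tide at 18:03.",
    lat=-41.29,    # approximate latitude of Wellington
    long=174.78,   # approximate longitude of Wellington
)
```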

In case you wanted to see any of the feeds, here they are:


Tweeting temporal tidal data

There are movements worldwide to free not only research publications through the Open Access publishing movement, but also to make data sets free and open. In New Zealand work in this area is being championed by the public OpenGovt.gov.nz site which has a useful open data catalogue of online open government created data sets. Having been involved with Open Access publishing for a few years due to my involvement with open access repositories, I thought I’d better start to get more involved.

One of my favourite Twitter feeds is that of @NZ_quake, which is run by Simon Lyall. This Twitter feed periodically polls the GeoNet website, which lists the latest earthquakes to occur in New Zealand (quite a regular occurrence!). When it sees that a new earthquake has been reported, it sends a tweet:

This got me thinking about other temporal data sets that could usefully be turned into a Twitter feed. Having lived in coastal areas for the past 12 years, my thoughts turned to the tides. Tides are constantly changing, and knowing the current state of the tide can be important. I thought it would be good to create twittering tide tables (or, to ‘twitterify’ the name, twides!).

Luckily for me, there is plenty of open data in this area. For New Zealand, comprehensive data is provided on the Land Information New Zealand web site. Data is provided for sixteen standard ports, and a further hundred or so secondary ports. The data is available in either CSV or PDF format (I chose the former), and despite the website only offering this year’s and next year’s data, a bit of URL tweaking can also grab the data for 2011 and 2012.
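As a sketch of the idea only – the URL pattern below is made up, not the real LINZ address – grabbing the extra years is just a matter of looping over the year (and port) in the download URL:

```python
import urllib.request

# Hypothetical URL pattern: the real LINZ address differs, but the principle
# is the same -- substitute the port and year into the download URL.
URL_PATTERN = "https://example.org/hydro/tides/{port}_{year}.csv"

for year in (2010, 2011, 2012):
    url = URL_PATTERN.format(port="auckland", year=year)
    with urllib.request.urlopen(url) as response:
        with open("auckland_{0}.csv".format(year), "wb") as out:
            out.write(response.read())
```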

There is no obvious use or re-use licence on the tides page, just a disclaimer and a link to a Crown Copyright declaration which does (commendably) include an open licence:

The material may be used, copied and re-distributed free of charge in any format or media. Where the material is redistributed to others the following acknowledgement note should be shown: “Sourced from LINZ. Crown Copyright reserved.”

A quick script takes this data (one row per day) and re-formats it as one tide (high or low) per line with a date-stamp. Another quick little script runs every minute via a cron job and checks each of the ports to see whether it is currently high or low tide there. If it is, it sends a tweet using the Twitter API. A rough sketch of both steps is below.
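This is only a sketch: the real LINZ columns are more involved than the simplified layout assumed here (a date followed by alternating time and height values), and tweet() stands in for the Twitter API call shown earlier.

```python
import csv
import datetime

def reformat(csv_in, tides_out):
    # Turn one-row-per-day data into one date-stamped tide per line.
    # Assumed layout: date, then alternating time ("HH:MM") and height (m).
    with open(csv_in) as src, open(tides_out, "w") as dest:
        for row in csv.reader(src):
            date, rest = row[0], row[1:]
            for time, height in zip(rest[0::2], rest[1::2]):
                dest.write("{0} {1} {2}\n".format(date, time, height))

def check_and_tweet(tides_file, tweet):
    # Run every minute from cron; tweet any tide due in the current minute.
    now = datetime.datetime.now().strftime("%Y-%m-%d %H:%M")
    with open(tides_file) as f:
        for line in f:
            date, time, height = line.split()
            if "{0} {1}".format(date, time) == now:
                tweet("Tide of {0}m right now.".format(height))

# Example crontab entry (the path is a placeholder):
# * * * * * /usr/bin/python /path/to/check_tides.py
```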

I have created Twitter feeds for three New Zealand ports so far:

  1. Auckland
  2. Wellington
  3. Onehunga

There is also a combined feed of all the tides at http://twitter.com/alltwides. If there are any other New Zealand ports that you would like to have a Twitter feed for, please feel free to get in touch as I have a simple script to create new feeds. Or if you know of other tide tables that are exposed via Twitter I’d be interested to see them.

Does Twitter provide a useful outlet for temporal data, or for tide tables? I’d be interested in your opinions! Please leave a comment below.

Preserving reactions to Lord Of The Rings

‘Preserving reactions to Lord Of The Rings’ is a funny title for a blog post, but I’ll explain…

Back in 2003–2004, our Department of Theatre, Film and Television Studies undertook the biggest audience response survey ever conducted for a film. They collected just short of 25,000 responses to the films from speakers of 14 different languages. The project is now finished and published, and they’re hoping to move on to even bigger projects of the same type. So the work is ready to archive in our repository, and it’s my job to archive the data in such a way as to enable and ensure its preservation.

Now, I’m no preservation expert, so the following details what I did to archive the data, which was given to us as a Microsoft Access database plus a Word document explaining the structure of the database and the codings it used:

  • The database: There is nothing wrong as such with archiving an Access database – it can easily be used by people today. So that gets archived. But what about a long-term copy for archival and preservation purposes? Access has a handy ‘Export to XML’ feature. That looks good! It even gives the option to ensure the file is correctly encoded in UTF-8, which preserves the audience responses in different character sets. (As an aside, the XML file is about 40 MB, so to get an XML editor to open it in order to validate it and check the encoding, I had to upgrade the RAM on my Vista workstation from 2 GB to 4 GB! A short script could have done the same checks – see the sketch after this list.)
  • The guidance notes: These came in Microsoft Word format – nice and easy, so that gets archived. A PDF/A copy is then created using Microsoft Word’s ‘Export to PDF’ option, and that is archived too.
  • The repository: All of this is stored in a DSpace-powered repository, with daily file checksum checks run to detect bit-rot, nightly backups to disk and tape, and off-site copies of the tapes.
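The RAM upgrade would not actually have been needed just to run those checks: a short script can stream through the export and confirm it is both valid UTF-8 and well-formed XML without holding the whole file in memory. A sketch (the filename is hypothetical):

```python
import codecs
import xml.etree.ElementTree as ET

def check_export(path):
    # Confirm the file decodes cleanly as UTF-8, reading it in 1 MB chunks.
    decoder = codecs.getincrementaldecoder("utf-8")()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            decoder.decode(chunk)
        decoder.decode(b"", final=True)

    # Stream-parse to confirm the XML is well-formed; clearing elements as we
    # go keeps memory use small even for a large export.
    for _, elem in ET.iterparse(path):
        elem.clear()

check_export("lotr_audience_responses.xml")  # hypothetical filename
```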
Now, to a non-preservation expert, this all sounds too easy. Have I been naive and missed anything out? (Wouldn’t surprise me! 🙂 )
