Tag Archives: open access

Facebook advertising Open Access “Are you a researcher?”

2012 has been a busy year in the world of Open Access.  From a UK funding point of view, the big news has included the Finch Report and the RCUK’s reaction to it in its new Policy on Access to Research Outputs.  To cut a very long story short, the RCUK is now providing £17+ million to UK institutions (pro rata to the size of grants) to help fund Gold Open Access: that is, payments of Article Processing Charges (APCs) to make journal papers free at the point of use, from the publisher’s website, under a Creative Commons Attribution (CC-BY) licence, at the time of first publication.  There are many ongoing debates about how to apportion this money, exactly what it covers, and how best to administer and report the spending.

An unsurprising reaction to this has come from the hybrid open access publishers.  Pure Open Access publishers (BioMed Central, PLoS etc.) already run their business model this way.  Traditional publishers have had to introduce hybrid approaches that allow Gold APCs to be paid to make available papers that would normally have been funded by subscriptions.  The latest changes for hybrid publishers have been to take into account the requirement for the CC-BY licence.  One example is the Nature Publishing Group, which has introduced differential pricing based on the Creative Commons licence selected, for example to make up for the shortfall in income from reprints.  Another example is Wiley and its new Open Access schemes.

However the point of this blog post was my surprise at logging into Facebook this morning…

[Screenshot: Facebook sidebar showing Springer adverts]

In case you missed it, here is one advert in particular that I’ve not seen before…

[Screenshot: the “Are you a researcher?” advert]

Clicking on this takes you to Springer’s web page on Open Access:

[Screenshot: Springer’s Open Access web page]

Springer is advertising on Facebook to let authors know about its journals and open access publishing options and, most importantly, that there is money from RCUK to back it up (for RCUK-funded outputs).

I don’t want to pass judgement on this, and I don’t really have an opinion on it, but it is an interesting development!  Those of us who work closely with these Open Access initiatives and the RCUK block grants need to be aware of the messages that are being put out there.  This is a new message in a new medium!

A prize will be offered for the first (genuine!!!) enquiry received about Open Access and the RCUK funding from an author who ‘saw it on Facebook’!  It will be interesting to see how well this message propagates and is understood.

Tweeting tides updated

Almost a year ago I implemented a system that took tidal data openly published by Land Information New Zealand for some ports around the country and published it as real-time status updates on Twitter.  Each time there is a high or low tide, a tweet is sent saying what the time is, whether it is a high or low tide, how high or low the tide is, and when the next tide will arrive.  Other than adding a few more tide tables, I’ve not had to touch the system since then.
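
As a rough illustration, each status update can be composed along these lines (the function and exact wording here are illustrative, not the actual script’s):

```python
from datetime import datetime

def format_tide_tweet(kind, when, height_m, next_kind, next_when):
    # kind/next_kind are "high" or "low"; heights are in metres.
    return (f"{kind.capitalize()} tide of {height_m:.1f}m at {when:%H:%M}. "
            f"Next {next_kind} tide at {next_when:%H:%M}.")

# Example: a high tide followed by a low tide later the same day.
print(format_tide_tweet("high", datetime(2010, 7, 1, 9, 24), 3.1,
                        "low", datetime(2010, 7, 1, 15, 41)))
```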

Recently, however, Twitter made changes to its authentication system, which means the previous approach of simply sending a web request along with a username and password is no longer accepted.  Instead, clients have to use OAuth.  I won’t go into this in much detail, other than to say it is a great system for web-based or desktop clients, but for server-side scripts that are not used by ‘users’ as such, it is a bit of a pain.  Well, several hours of pain, including copying tokens from session cookies etc., but it is working fine now.
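
For the record, the fix boils down to signing each request with OAuth 1.0a tokens instead of sending a username and password.  A minimal sketch of the posting step using the tweepy library (an illustration, not necessarily what the actual script uses; the credential values are placeholders):

```python
import tweepy

# OAuth 1.0a credentials generated once via Twitter's developer pages.
# For a single-account server-side script they can simply be pasted into
# the configuration rather than obtained through the interactive flow.
CONSUMER_KEY = "..."
CONSUMER_SECRET = "..."
ACCESS_TOKEN = "..."
ACCESS_TOKEN_SECRET = "..."

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)

api.update_status("High tide of 3.1m at 09:24. Next low tide at 15:41.")
```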

Whilst I was fixing this, I thought I’d make use of one of Twitter’s newer features: the ability to geo-tag tweets.  Since the tide tables relate to specific ports, they are obvious candidates for geo-tagging.  I have the latitude and longitude values for each of the ports, so these are now added to the tweets.  They render differently in different Twitter clients; TweetDeck, for example, displays the location alongside the tweet.
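
Passing the coordinates along with the status update is all that is needed; a sketch continuing from the snippet above (the coordinates are approximate values for Auckland, used purely for illustration):

```python
# "api" is the authenticated tweepy.API object from the previous sketch.
# The account's "add location to my tweets" setting must be enabled for
# the coordinates to be attached to the tweet.
api.update_status(
    "High tide of 3.1m at 09:24. Next low tide at 15:41.",
    lat=-36.84,   # approximate latitude of the Port of Auckland
    long=174.77,  # approximate longitude
)
```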

In case you wanted to see any of the feeds, here they are:

[Links to the individual tide feeds]

Library Mashups book – Chapter 17 now Open Access

A new book, ‘Library Mashups: Exploring New Ways to Deliver Library Data’, has now been published. The book, edited by Nicole Engard, has a great list of 25 authors from all across the globe, including well-known names in the library-tech world such as Tim Spalding, Ross Singer, Bess Sadler and Bonaria Biancu. The chapters cover subjects from the basics, such as ‘What is a mashup?’ and ‘Making your data available to be mashed up’, to loads of very specific library-oriented chapters such as ‘Mashing up with librarian knowledge’, ‘Breaking into the OPAC’ and ‘Mashups with WorldCat affiliate services’. There is also a section of the book about interacting with other types of services such as maps, pictures and videos.

Why am I writing about this? Well, for three reasons:

1) The book is great. I’ve learned a lot from it, and have enjoyed reading it. I particularly like this quote by Tim Spalding (of LibraryThing.com) in his chapter “Breaking into the OPAC”:

As a computer programmer with no experience of the library world, I figured this [helping libraries to add LibraryThing data to their catalogues] would be a simple problem to solve. Of course I found out that the library world was different. The code behind its systems was closed and unextensible, with virtually no APIs in or out.

Read his chapter to hear his experiences and answers.

2) The second reason is that I am one of the lucky authors who has been able to contribute to the book. Chapter 17, “The Repository Mashup Map”, looks at the development of the Repository66 mashup map of Open Access repositories across the world. The chapter explores why the mashup was created, how it was created, and (hopefully most usefully) some of the design decisions that need to be taken into account when making a mashup: decisions about when and how to download the data, how to match sources, and when and where to manipulate the data.

3) However, the main reason for this blog post is to say that a copy of the chapter has now been published online ‘Open Access’. You can find it in the DSpace repository we run at the University of Auckland Library:

Download URL: http://hdl.handle.net/2292/5258

I hope that you find it useful.

[UPDATE 2/Nov/2009]: Chapter 2 of the book ‘Behind the Scenes: Some Technical Details’ by Bonaria Biancu is now also available open access: http://hdl.handle.net/10281/5117

Tweeting temporal tidal data

There are movements worldwide to free not only research publications, through the Open Access publishing movement, but also to make data sets free and open. In New Zealand, work in this area is being championed by the public OpenGovt.gov.nz site, which has a useful catalogue of openly available, government-created data sets. Having been involved with Open Access publishing for a few years through my involvement with open access repositories, I thought I’d better start to get more involved.

One of my favourite Twitter feeds is @NZ_quake, which is run by Simon Lyall. This Twitter feed periodically polls the GeoNet website, which lists the latest earthquakes to occur in New Zealand (quite a regular occurrence!). When it sees that a new earthquake has been reported, it sends a tweet:

[Screenshot: an @NZ_quake earthquake tweet]

This got me thinking about other temporal data sets that could usefully be turned into a Twitter feed. Having lived in coastal areas for the past 12 years, my thoughts turned to the tides. Tides are constantly changing, and knowing the current state of the tide can be important. I thought it would be good to create twittering tide tables (or, to ‘twitterify’ the name, twides!!!)

Luckily for me, there is plenty of open data in this area. For New Zealand, comprehensive data is provided on the Land Information New Zealand web site. Data is provided for sixteen standard ports, and a further hundred or so secondary ports. The data is available in either CSV or PDF format (I chose the former), and although the website only offers this year’s and next year’s data, a bit of URL tweaking can also grab the data for 2011 and 2012.

There is no obvious use or re-use licence on the tides page, just a disclaimer and a link to a Crown Copyright declaration which does (commendably) include an open licence:

The material may be used, copied and re-distributed free of charge in any format or media. Where the material is redistributed to others the following acknowledgement note should be shown: “Sourced from LINZ. Crown Copyright reserved.”

A quick script takes this data (one row per day) and re-formats it as one tide (high or low) per line with a date-stamp. Another quick script runs every minute via a cron job and checks each of the ports to see whether it is currently high or low tide there. If it is, it sends a tweet using the Twitter API.
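
Neither script is reproduced here, but a minimal sketch of the two steps might look something like this in Python. The CSV column layout and the date/time formats are assumptions for illustration only; LINZ’s actual files differ in the details:

```python
import csv
from datetime import datetime

def reformat(csv_path, out_path):
    """Turn one-row-per-day data into one date-stamped tide per line.
    Assumed (illustrative) input layout: date, then alternating
    time,height pairs for that day's tides."""
    with open(csv_path) as src, open(out_path, "w") as out:
        for row in csv.reader(src):
            date, events = row[0], row[1:]
            pairs = list(zip(events[::2], events[1::2]))
            for i, (time_, height) in enumerate(pairs):
                # Highs and lows alternate, so each tide's kind can be
                # inferred by comparing its height with a neighbour's.
                j = i + 1 if i + 1 < len(pairs) else i - 1
                kind = "high" if float(height) > float(pairs[j][1]) else "low"
                out.write(f"{date} {time_}\t{kind}\t{height}\n")

def check_port(table_path, now=None):
    """Run every minute from cron: return the tide line matching the
    current minute (assuming "YYYY-MM-DD HH:MM" stamps), or None.
    Sending the tweet is left to the caller."""
    stamp = (now or datetime.now()).strftime("%Y-%m-%d %H:%M")
    with open(table_path) as table:
        for line in table:
            if line.startswith(stamp):
                return line.strip()
    return None
```

A crontab entry along the lines of `* * * * * python check_tides.py` then drives the per-minute check, with a tweet sent whenever check_port returns a line.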

[Screenshot: an example tide tweet for Auckland]

I have created Twitter feeds for three New Zealand ports so far:

  1. Auckland
  2. Wellington
  3. Onehunga

There is also a combined feed of all the tides at http://twitter.com/alltwides. If there are any other New Zealand ports that you would like to have a Twitter feed for, please feel free to get in touch as I have a simple script to create new feeds. Or if you know of other tide tables that are exposed via Twitter I’d be interested to see them.

Does Twitter provide a useful outlet for temporal data, or for tide tables? I’d be interested in your opinions! Please leave a comment below.

Follow Google’s green arrow to open content

There is some more good news for repositories that surfaced this weekend (via Peter Suber’s blog and Klaus Graf): Google Scholar now highlights results that have open access versions of papers available, by adding a green flag / arrow / triangle.

Google continues its behaviour of showing the publisher’s version of the paper as the first result, but where it does this, it now also lists the open version next to the title.

This should make Google Scholar much more useful, as one of the common arguments against it in the OA world is that it puts the publisher’s version first, even when it isn’t open and an open version is available. Thanks Google!

As a closing remark, I’ll comment on Peter Suber’s closing remark in his blog post:

Note the first item on the return list for this search:

The green triangle points to a version of an article with a Google address.  Is Google also entering the OA archiving business?

For all we know, Google may be entering the OA archiving business, but in this case it is just a PDF hosted on a ‘Google Page Creator’ site at http://pages.google.com/ (now ‘Google Sites’), a simple hosting facility that Google provides to anyone.

Google brings Scholar richness into normal search results

Some good news for open access repository advocates: It seems that the normal Google search engine has now started bringing the richness of Google Scholar results into the main Google search results. This extra information includes:

  • The (first) author’s name
  • Links to papers that have cited it
  • Links to related articles
  • Links to other versions

For me this is great news. When we go out selling repositories to academics, one of our arguments is “your paper will appear in Google Scholar, and other specialist search engines such as Intute Repository Search and OAIster”. However, if we are honest, how many people use these (and I’m including Google Scholar in this) as their first port of call? Not many, I suspect.

So getting this extra information into Google is a big selling point as we now get the richness of Google Scholar into our default search service.

As an example, a paper written a couple of years ago by Jon Bell and myself, about using OAI-PMH and METS to move items between repositories, now appears in the main results with the extra metadata from Google Scholar shown alongside it.