GitHub to repository deposit

Over the past few months there have been positive shifts in the infrastructure available to archive software.  To ‘archive software’ can mean many things to many people, but for the purposes of this blog post, I’ll take the view that this is to take (well managed) code out of an existing source code control system, make a point-in-time snapshot of the code, and deposit that into a long-term repository, along with some basic descriptive metadata.

To this end, both Figshare and Zenodo have recently developed and released integrations with GitHub.  Both allow the depositor to easily take a copy of their code from GitHub and deposit it into the respective repository.  One of the key benefits of doing this is that the repository platforms are then able to assign a persistent DataCite DOI (Digital Object Identifier) to the software, which makes it easier to cite and track through the scholarly literature.

As one of the developers of the open SWORD deposit protocol that facilitates the deposit of resources into repositories, I thought it would be good to try and re-create this functionality using SWORD.  Below is the ‘recipe’ of how this works…

Step one (optional): Setup your browser with a bookmark
To make it easier to deposit code from GitHub, you can install a ‘bookmarklet’ that automatically detects the GitHub repository you are viewing, and lets the deposit system know where it is.  This means that from any GitHub repository, you can click on the bookmark to deposit the code.  To install it, open the deposit client in your browser and drag the bookmarklet at the bottom of the page to your browser’s bookmark bar:

Install bookmarklet

Step two: Choose the GitHub repository to deposit

GitHub makes use of accounts and repositories.  Each user of the service has an account, and each account can hold multiple code repositories.  GitHub URLs take the form https://github.com/{account}/{repository}; for example, the PHP programming language is stored at https://github.com/php/php-src (php is the account name, and php-src is the code repository for the PHP language).
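Splitting such a URL back into its account and repository parts is straightforward.  Here is a small Python sketch (illustrative only – the deposit client itself is written in PHP, and the function name is mine):

```python
from urllib.parse import urlparse

def parse_github_url(url):
    """Split a GitHub repository URL into (account, repository)."""
    path = urlparse(url).path.strip("/")
    account, repository = path.split("/")[:2]
    return account, repository

print(parse_github_url("https://github.com/php/php-src"))
# → ('php', 'php-src')
```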

Choose the GitHub repository that you wish to deposit by opening it in your browser.  In the example below, this is the DSpace repository platform’s code repository:

Choose repository

Step three: Click the bookmark!
If you click the ‘GitHub Deposit’ bookmark that you created earlier, it will redirect you to a SWORD deposit system.  The bookmarklet contains JavaScript that passes the URL of the GitHub repository to the deposit client, and populates the form automatically.  Alternatively, you can visit the deposit client directly and enter the URL of the repository yourself:

Click bookmark

Step four: Download the code

Clicking ‘Next >’ will initiate the download of the latest version of the code (‘master’ in git terminology).  Depending on the size of the repository, this may take a few seconds.  The code isn’t doing anything clever, and unlike the Zenodo and Figshare integrations, it doesn’t make use of the GitHub API.  Instead, it downloads the code bundle by constructing the URL of GitHub’s zip snapshot of the master branch.  It then extracts basic metadata such as the title of the repository (title), the account holder (author), the URL of the repository (link), and the latest check-in comment and revision hash (abstract).  These are then presented back to you to confirm:

Verify metadata
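As an aside, the snapshot download relies on GitHub serving zip archives of a branch at a predictable address.  The exact pattern below is my assumption of what gets constructed (the post itself elides the URL), sketched in Python:

```python
def github_archive_url(account, repository, ref="master"):
    """Build the URL of GitHub's zip snapshot of a branch.
    NOTE: this URL pattern is assumed, not quoted from the deposit client."""
    return f"https://github.com/{account}/{repository}/archive/{ref}.zip"

print(github_archive_url("DSpace", "DSpace"))
# → https://github.com/DSpace/DSpace/archive/master.zip
```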

Step five: Perform the deposit
Upon clicking the deposit button, the code will translate the metadata into a METS file, and zip that up alongside the downloaded code bundle.  All this is then deposited into a demo DSpace server.  Assuming the deposit works, you’ll be presented with the URL of the deposited code.  In this case it is a ‘handle’, but to all intents and purposes that works like a DOI, and DSpace can be configured to issue DOIs.

Handle issued
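The packaging step described above can be sketched as follows.  This is a toy illustration in Python: the METS stub is not a valid METS profile (a real deposit wraps descriptive metadata in mdWrap/xmlData sections using a recognised schema), and the function name is mine:

```python
import io
import zipfile
import xml.etree.ElementTree as ET

METS_NS = "http://www.loc.gov/METS/"

def build_sword_package(code_zip_bytes, title, author):
    """Bundle a (toy) METS metadata file alongside the downloaded
    code archive in a single zip, ready to deposit via SWORD."""
    mets = ET.Element(f"{{{METS_NS}}}mets")
    dmd = ET.SubElement(mets, f"{{{METS_NS}}}dmdSec", ID="dmd_1")
    # Illustrative bare elements; real METS embeds a recognised
    # descriptive schema (e.g. Dublin Core) inside mdWrap/xmlData.
    ET.SubElement(dmd, "title").text = title
    ET.SubElement(dmd, "creator").text = author

    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("mets.xml", ET.tostring(mets, encoding="unicode"))
        z.writestr("code.zip", code_zip_bytes)
    return buf.getvalue()

package = build_sword_package(b"...zip bytes from GitHub...", "DSpace", "DSpace")
print(zipfile.ZipFile(io.BytesIO(package)).namelist())
# → ['mets.xml', 'code.zip']
```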

Step six: View the code

To see the deposited code in the repository, just click on the handle link!  This will take you to the repository, where the metadata can be seen and the code downloaded!

Code in the repository

This isn’t a highly polished integration, and was thrown together in a couple of hours by adding it as an optional ‘step’ in the configurable web-based deposit client ‘EasyDeposit’.  But it is a good demonstration that creating small tools that archive code into SWORD-compliant repositories (DSpace, EPrints, Fedora, etc.) can be achieved quite quickly!

ResourceSync and SWORD

This is the third post in a short series of blog posts about ResourceSync.  Thanks to Jisc funding, a small number of us from the UK have been involved in the NISO / OAI ResourceSync Initiative.  This has involved attending several meetings of the Technical Committee to help design the standard, working on documenting some of the different ResourceSync use cases, and working on some trial implementations.  As mentioned in the previous blog posts, I’ve been creating a PHP API library that makes it easy to interact with ResourceSync-enabled services.

In order to really test the library, it is good to think of a real end-to-end use case and implement it.  The use case I chose was to mirror one repository to another, and then keep it up to date.  This involves a baseline sync to gather all the content, followed by an incremental sync of the changes made each day.

ResourceSync provides the mechanism by which to gather the resources from the remote repository.  However another function is then required to take those resources and put them into the destination repository.  The obvious choice for this is SWORD v2.

ResourceSync is designed to list all files (or changed files) on a server.  These are then transferred using good old HTTP, but to get them into another repository requires a deposit protocol – in this case, SWORD.  In other words, ResourceSync is used to harvest the resources onto my computer, and SWORD is then used to deposit them into a destination repository.

The challenge here is linking resources together.  An ‘item’ in a repository is typically made up of a metadata resource, along with one or more associated file resources.  Because these are separate resources, they are listed independently in the ResourceSync resource lists.  However they contain attributes that link them together: ‘describes’ and ‘describedBy’.  The metadata ‘describes’ the file, and the file is ‘describedBy’ the metadata.  A good example of this is given in the CottageLabs description of how the OAI-PMH use case can be implemented using ResourceSync:

<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:rs="http://www.openarchives.org/rs/terms/">
  <rs:ln rel="resourcesync" href=""/>
  <rs:md capability="resourcelist" modified="2013-01-03T09:00:00Z"/>
  <url>
    <loc></loc>
    <rs:ln rel="describes" href=""/>
    <rs:ln rel="describedBy" href=""/>
    <rs:ln rel="collection" href=""/>
    <rs:md hash="md5:1584abdf8ebdc9802ac0c6a7402c03b6"/>
  </url>
  <url>
    <loc></loc>
    <rs:ln rel="describedBy" href=""/>
    <rs:ln rel="describedBy" href=""/>
    <rs:ln rel="collection" href=""/>
    <rs:md hash="md5:1e0d5cb8ef6ba40c99b14c0237be735e"/>
  </url>
</urlset>

So here’s the recipe (and here’s the code) for syncing a resource list such as this, and then depositing it into a remote repository using SWORD.  Both use PHP libraries, which makes the code quite short.

The recipe

$resourcelist = new ResyncResourcelist('');
$resourcelist->registerCallback(function($file, $resyncurl) {
    // Work out if this is a metadata object or a file
    global $metadataitems, $objectitems;
    $type = 'metadata';
    $namespaces = $resyncurl->getXML()->getNameSpaces(true);
    if (!isset($namespaces['rs'])) {
        $namespaces['rs'] = 'http://www.openarchives.org/rs/terms/';
    }
    $lns = $resyncurl->getXML()->children($namespaces['rs'])->ln;
    $key = '';
    $owner = '';
    foreach ($lns as $ln) {
        // A non-empty describedBy link means this resource is a file
        // that is described by a separate metadata resource
        if (($ln->attributes()->rel == 'describedby') && ($ln->attributes()->href != '')) {
            $type = 'object';
            $key = $resyncurl->getLoc();
            $owner = $ln->attributes()->href;
        }
    }

    echo ' - New file saved: ' . $file . "\n";
    echo '  - Type: ' . $type . "\n";

    if ($type == 'metadata') {
        $metadataitems[] = $resyncurl;
    } else {
        $objectitems[(string)$key] = $resyncurl;
    }
});

This piece of code is performing a baseline sync, and is using the callback registration option mentioned in the last blog post.  The callback is doing just one thing: sorting the metadata objects into one list, and the file objects into another.  These will then be processed later.

Next, each metadata item is processed in order to deposit that metadata object into the destination repository using SWORD v2:

$counter = 0;
foreach ($metadataitems as $item) {
    echo " - Item " . ++$counter . ' of ' . count($metadataitems) . "\n";
    echo "  - Metadata file: " . $item->getFileOnDisk() . "\n";
    $xml = simplexml_load_file($item->getFileOnDisk());
    $namespaces = $xml->getNameSpaces(true);
    if (!isset($namespaces['dc'])) {
        $namespaces['dc'] = 'http://purl.org/dc/elements/1.1/';
    }
    if (!isset($namespaces['dcterms'])) {
        $namespaces['dcterms'] = 'http://purl.org/dc/terms/';
    }
    $dc = $xml->children($namespaces['dc']);
    $dcterms = $xml->children($namespaces['dcterms']);
    $title = $dc->title[0];
    $contributor = $dc->contributor[0];
    $id = $dc->identifier[0];
    $date = $dcterms->issued[0];
    echo '   - Location: ' . $item->getLoc() . "\n";
    echo '   - Author: ' . $contributor . "\n";
    echo '   - Title: ' . $title . "\n";
    echo '   - Identifier: ' . $id . "\n";
    echo '   - Date: ' . $date . "\n";

    // Create the atom entry
    $atom = new PackagerAtomTwoStep($resync_test_savedir, $sword_deposit_temp, '', '');
    $atom->addMetadata('creator', $contributor);
    // (title, identifier and date are added in the same way)

    // Deposit the metadata record, with In-Progress set to true
    $atomfilename = $resync_test_savedir . '/' . $sword_deposit_temp . '/atom';
    echo '  - About to deposit metadata: ' . $atomfilename . "\n";
    // (connection details such as $sac_user / $sac_password are set elsewhere;
    //  the argument list shown is indicative of the SWORD v2 PHP library's API)
    $deposit = $sword->depositAtomEntry($sac_deposit_location,
                                        $sac_user, $sac_password, '',
                                        $atomfilename, true);
    // ... the loop continues below, where related files are added ...

The approach being used here is to first create an atom entry that contains the metadata, and to deposit that.  The SWORD v2 ‘In-Progress’ flag is set to TRUE, which indicates that further activity will take place on the record.

The code then needs to look through the list of file resources, and find any that are ‘describedBy’ the metadata record in question.  Any that are, are deposited to the same record using SWORD v2:

// Find related files for this metadata record
foreach($objectitems as $object) {
if ((string)$object->getOwner() == (string)$item->getLoc()) {
$finfo = finfo_open(FILEINFO_MIME_TYPE);
$mime = finfo_file($finfo, $object->getFileOnDisk());
echo ‘    – Related object: ‘ . $object->getLoc() . "\n";
echo ‘     – File: ‘ . $object->getFileOnDisk() . ‘ (‘ . $mime . ")\n";

// Deposit file
$deposit = $sword->addExtraFileToMediaResource($edit_media,

Using the SWORD v2 API library is very easy: once you have the file and its MIME type, it is a single line of code to add that file to the record in the destination repository.

Once all the related files have been added, the final step is to set the ‘in-progress’ flag to FALSE to indicate that the object is complete, and that it can be formally archived into the repository.  This is as simple as:

// Complete the deposit by setting In-Progress to false
// (again, the argument list shown is indicative of the library's API)
$deposit = $sword->completeIncompleteDeposit($edit_iri,
    $sac_user, $sac_password, '');

The end to end process has now taken place – the items have been harvested using ResourceSync, and then deposited back using SWORD v2.


The default DSpace implementation of the SWORD v2 protocol allows items to be deposited, updated, and deleted.  It does this by keeping items in the workflow; when the ‘In-Progress’ flag is set to false, the deposit is completed by moving it out of the workflow and into the main archive.  Once the item is moved into the main archive, it can no longer be edited using SWORD.

This is a sensible approach for most situations.  Once an item has been formally ingested, it is under the control of the archive manager, and the original depositor should probably not have the rights to make further changes.

However, in the case of performing a synchronisation with ResourceSync, the master copy of the data is held in the remote repository, and it should therefore be allowed to overwrite data that is formally archived in the local repository.  This is an implementation option though, and if an alternative WorkflowManager were written, this could be changed.

[Update: 20th June 2013.  I have now edited the default WorkflowManager, to make one that permits updates to items that are in workflow or in the archive.  This overcomes this limitation.  I hope to add this as a configurable option to a future release of DSpace.]


ResourceSync and SWORD are two complementary interoperability protocols. ResourceSync can be used to harvest all content from one site, and SWORD used to deposit that content into another.

ResourceSync can differentiate between new, updated, and deleted content.  SWORD v2 also allows these interactions, so can be used to reflect those changes as they happen.

Resourcesync: Making things happen with callbacks

In a previous blog post I introduced the ResourceSync PHP API library.  This is a code library written in PHP that makes it easy to interact with web sites that support the new ResourceSync standard.  The default behaviour of the code when synchronising with a server, either during a baseline sync (a complete sync) or an incremental sync (of only the files changed since the last baseline sync), is simply to download the files and store them on disk in the same directories as they exist on the server.

However, unless you want to just store the files for backup purposes, the chances are that you’ll want to process them in some way.  There are two ways to do this, either perform the synchronisation, and then process the files, or process them as they are downloaded.

From the last post, you’ll know that by using the ResourceSync PHP library, performing a sync can be as simple as:

include 'ResyncResourcelist.php';
$resourcelist = new ResyncResourcelist('');
$resourcelist->baseline('/resync');
This will process the resourcelist file by file, and download them to the /resync/ directory.

In order to process these, you need to register a ‘callback’ function with the library.  Each time an item is synchronised, the code in the callback function will be executed.

The following code snippet shows a very simple example of a callback.  This example displays the filename of the resource that has been downloaded, and prints the XML that described the file in the ResourceSync resourcelist.  The XML can be useful as it provides contextual information about the file, such as its size, checksum, last modified date, and links to related items.  Of course some of these will have already been checked by the library (such as the last modified date when using the date range option, and the checksum to make sure the file has been retrieved successfully).

$resourcelist->registerCallback(function($file, $resyncurl) {
    echo '  - Callback given value of ' . $file . "\n";
    echo '   - XML:' . "\n" . $resyncurl->getXML()->asXML() . "\n";
});
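Incidentally, the checksum check the library performs amounts to comparing a file’s digest with the hash attribute from the resourcelist.  For illustration, in Python rather than the library’s PHP, using the ‘md5:…’ prefix form that appears in ResourceSync hash attributes:

```python
import hashlib

def verify_md5(data, expected):
    """Compare downloaded bytes against a hash value like 'md5:5d41...'."""
    algo, _, digest = expected.partition(":")
    if algo != "md5":
        raise ValueError("only md5 is handled in this sketch")
    return hashlib.md5(data).hexdigest() == digest

print(verify_md5(b"hello", "md5:5d41402abc4b2a76b9719d911017c592"))
# → True
```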

When performing a baseline sync using the ResyncResourcelist class it is only possible to register a single callback.  This is called whenever any file is downloaded.

However the ResyncChangelist class allows three different callbacks to be registered, depending on the action: CREATED, UPDATED, or DELETED.

$changelist->registerCreateCallback(function($file, $resyncurl) {
    echo '  - CREATE Callback given value of ' . $file . "\n";
    echo '   - XML:' . "\n" . $resyncurl->getXML()->asXML() . "\n";
});

$changelist->registerUpdateCallback(function($file, $resyncurl) {
    echo '  - UPDATE Callback given value of ' . $file . "\n";
    echo '   - XML:' . "\n" . $resyncurl->getXML()->asXML() . "\n";
});

$changelist->registerDeleteCallback(function($file, $resyncurl) {
    echo '  - DELETE Callback given value of ' . $file . "\n";
    echo '   - XML:' . "\n" . $resyncurl->getXML()->asXML() . "\n";
});

Depending on the purpose of your code, it is likely that you would want to handle these three types of events in different ways, hence the three callback options.

In the next blog post, I’ll show an example of this code in action, as it uses the callback to look at each resource’s XML to discover whether it is a metadata file or a related resource.  It then uses this information to deposit the item into a repository using SWORD.

The ResourceSync PHP Library

Over the past year, thanks to funding from the Jisc, I’ve been involved with the NISO / OAI ResourceSync initiative.  The aim of ResourceSync is to provide mechanisms for large-scale synchronisation of web resources.  There are lots of use cases for this, and many reasons why it is an interesting problem.  For some background reading, I’d suggest:

A quick read of the specification will highlight that it is based on the sitemaps format, which is no surprise given that sitemaps were developed for the easy and efficient listing of web resources for search engine crawlers to harvest – which in itself is a specialised form of resource synchronisation.

As with anything new, the proof of the pudding is in the eating.  In this context that means reference implementations are required, both to test that the standard can be implemented and fulfils the use cases it was designed for, and to smooth off any rough edges that only appear once it is used in anger.

My role therefore has been to develop a PHP ResourceSync client library.  The role of a client library is to allow other software systems to easily interact with a technology – in this case, web servers that support ResourceSync.  The client library therefore provides the facility to connect to a web server and synchronise the contents, and then to stay up to date by loading lists of resources that have been created, updated, or deleted.

The PHP library can be downloaded from:

The rest of this blog post will step through the different parts of ResourceSync, and show how they can be accessed via the PHP client library.

The first step is to discover whether a site supports ResourceSync.  The mechanism for doing this is the well-known URI specification (see RFC 5785).  Put simply, if a server supports ResourceSync, it places a file at /.well-known/resourcesync, which points to where the capability list exists.
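For illustration, building that well-known URI for any server is a one-liner (a Python sketch; the PHP library does the equivalent internally):

```python
from urllib.parse import urljoin

def wellknown_resourcesync(base_url):
    """Return the RFC 5785 well-known URI at which a ResourceSync
    description is expected to be found."""
    return urljoin(base_url, "/.well-known/resourcesync")

print(wellknown_resourcesync("http://example.org/repository/"))
# → http://example.org/.well-known/resourcesync
```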

The first function of the PHP ResourceSync library is therefore to support this discovery:

$resyncdiscover = new ResyncDiscover('');
$capabilitylists = $resyncdiscover->getCapabilities();
echo ' - There were ' . count($capabilitylists) .
     ' capability lists found:' . "\n";
foreach ($capabilitylists as $capabilitylist) {
    echo ' - ' . $capabilitylist . "\n";
}

Zero, one, or more capability list URIs are returned.  If none are returned, then the site doesn’t support ResourceSync.  If one is returned, the next step is to examine the capability list to see which parts of the ResourceSync protocol are supported:

$resynccapabilities = new ResyncCapabilities('');
$capabilities = $resynccapabilities->getCapabilities();
echo 'Capabilities' . "\n";
foreach ($capabilities as $capability => $type) {
    echo ' - ' . $capability . ' (capability type: ' . $type . ')' . "\n";
}

The output lists the specific ResourceSync capabilities supported by that server.  Typically a resourcelist and a changelist will be shown.

The next step is often to perform a baseline sync (complete download of all resources).  Again, the PHP library supports this:

include 'ResyncResourcelist.php';
$resourcelist = new ResyncResourcelist('');
$resourcelist->enableDebug(); // Show progress
$resourcelist->baseline('/resync');

It is possible to ask the library how many files it has downloaded, and how large they were:

echo $resourcelist->getDownloadedFileCount() . ' files downloaded, and ' .
     $resourcelist->getSkippedFileCount() . ' files skipped' . "\n";
echo $resourcelist->getDownloadSize() . 'Kb downloaded in ' .
     $resourcelist->getDownloadDuration() . ' seconds (' .
     ($resourcelist->getDownloadSize() /
      $resourcelist->getDownloadDuration()) . ' Kb/s)' . "\n";

It is also possible to restrict the files to be downloaded to those from a certain date onwards.  This can be useful if you only want to synchronise recently created files:

$from = new DateTime("2013-05-18 00:00:00.000000");
$resourcelist->baseline('/resync', $from);

Once a baseline sync has taken place, all of the files exposed via the ResourceSync interface will now exist on the local computer.  The next step is to routinely keep this set of resources up to date.  To do this, depending on the frequency at which the server produces change lists, these should be processed to download new or updated files, and to delete old files:

include 'ResyncChangelist.php';
$changelist = new ResyncChangelist('');
$changelist->enableDebug(); // Show progress
$changelist->process('/resync');
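Conceptually, processing a change list means dispatching each entry on its change type.  A toy Python sketch (the library itself is PHP, and parses real change list XML rather than tuples):

```python
def apply_changes(changes, handlers):
    """Dispatch each (change_type, resource) pair from a change list
    to the handler registered for 'created', 'updated' or 'deleted'."""
    for change_type, resource in changes:
        handlers[change_type](resource)

log = []
handlers = {
    "created": lambda r: log.append("download new " + r),
    "updated": lambda r: log.append("re-download " + r),
    "deleted": lambda r: log.append("remove local " + r),
}
apply_changes([("created", "article1.xml"), ("deleted", "old.pdf")], handlers)
print(log)
# → ['download new article1.xml', 'remove local old.pdf']
```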

Again, there are options to see what files have been processed:

echo ' - ' . $changelist->getCreatedCount() . ' files created' . "\n";
echo ' - ' . $changelist->getUpdatedCount() . ' files updated' . "\n";
echo ' - ' . $changelist->getDeletedCount() . ' files deleted' . "\n";
echo $changelist->getDownloadedFileCount() . ' files downloaded, and ' .
     $changelist->getSkippedFileCount() . ' files skipped' . "\n";
echo $changelist->getDownloadSize() . 'Kb downloaded in ' .
     $changelist->getDownloadDuration() . ' seconds (' .
     ($changelist->getDownloadSize() /
      $changelist->getDownloadDuration()) . ' Kb/s)' . "\n";

Again, it is possible to only see changes made since a particular date.  This can be combined with keeping a note of when the sync was last attempted, so that only changes made since then are processed:

$from = new DateTime("2013-05-18 00:00:00.000000");
$changelist->process('/resync', $from);

In a few steps, each consisting of just a few lines of code, the PHP library therefore allows the contents of a ResourceSync-enabled server to be kept in sync with a local copy.

A further two blog posts will be published in this series.  The next will show how to interact with the library so that more complex actions can be performed when resources are created, updated, or deleted.  The final blog post will show this in action, with an application of the PHP ResourceSync library making use of the resources it processes.

Facebook advertising Open Access “Are you a researcher?”

2012 has been a busy year in the world of Open Access.  From a UK funding point of view, the big news has included the Finch Report and the RCUK’s reaction to it in its new Policy on Access to Research Outputs.  To cut a very long story short, the RCUK is now providing £17+ million to UK institutions (pro-rata to the size of their grants) to help fund Gold Open Access: that is, payment of Article Processing Charges (APCs) in order to make journal papers free at the point of use, from the publisher’s website, with a Creative Commons Attribution (CC-BY) licence, at the time of first publication.  There are many ongoing debates about how to apportion this money, exactly what it covers, and how best to administer and report the spending.

An unsurprising reaction has come from the hybrid open access publishers.  Pure Open Access publishers (BioMed Central, PLoS, etc.) already run their business model this way.  Traditional publishers have had to introduce hybrid approaches that allow Gold APCs to be paid to make papers openly available that would normally have been funded by subscriptions.  The latest change for hybrid publishers has been to take into account the requirement for the CC-BY licence.  One example is the Nature Publishing Group, which has introduced differential pricing based on the Creative Commons licence selected, for example to make up for the shortfall of income from reprints.  Another example is Wiley and its new Open Access schemes.

However the point of this blog post was my surprise at logging into Facebook this morning…


In case you missed it, here is one advert in particular that I’ve not seen before…


Clicking on this takes you to Springer’s web page on Open Access:


Springer is advertising on Facebook to let authors know about their journals and open access publishing options, and most importantly, that there is money from RCUK to back it up (for RCUK-funded outputs).

I don’t want to pass judgement on this, I don’t really have an opinion on it, however it is an interesting development!  Those of us who work closely with these Open Access initiatives and the RCUK block grants need to be aware of the messages that are being put out there.  This is a new message in a new medium!

A prize will be offered for the first (genuine!!!) enquiry received about Open Access and the RCUK funding from an author who ‘saw it on Facebook’!  It will be interesting to see how well this message propagates and is understood.

Oh, the admin and the coder should be friends!

This is a tongue-in-cheek blog post a few days before the Open Repositories 2012 conference that is being held here in Edinburgh. I’ll give a bit of background first, a disclaimer, a video, then the main content of this post.

First the background: I have a slight love/hate relationship with the repository community and the Open Repositories conference, related to how it makes a strong distinction between ‘Repository Managers’ and ‘Developers’.  It’s nice that we do this, as it allows for innovative conference strands such as the ‘developers challenge’, where developers can come and show off their wares.  However I also hate this segregation and the labelling of delegates into these categories.  Personally I see myself as straddling the two, and I feel that we should be looking for our shared interests (developing open repository services) rather than highlighting differences between our roles.

However, I won’t rant or get on my soap-box; instead I’ll butcher a song from the Rodgers and Hammerstein musical ‘Oklahoma!’.  One of the most famous songs is about how the farmers and the cowmen don’t get along and look for all the differences between themselves, rather than trying to work together to make the most of being settlers in a new territory.  (See any similarities?!)

The disclaimer – the song makes the assumption that farmers and cowmen are all male, and that the females stay at home cooking, with the daughters waiting to get married.  In my re-working of the lyrics I’ve been equally sexist and made the repository managers female and the developers male.  This is not representative of my views or of reality, but it fits for the song!  So please don’t hold this against me!  This is just a light-hearted piece!

The song also fits with the OR2012 conference, as it talks about the admins and coders (‘repository managers’ and ‘developers’ are too long for the song!) dancing together.  The conference dinner will be ending with a ceilidh, where hopefully there will be much dancing and fun!  If you’ve never seen Oklahoma!, you can watch a performance of this song below:

So sing along (preferably in your head if you work in a shared office!)

The admin and the coder should be friends.
Oh the admin and the coder should be friends.
One of them likes to bulk upload, the other likes to cut some code,
But that’s no reason why they can’t be friends!

Repository folks should stick together,
Repository folks should all be pals.
Admins dance with the coders’ daughters,
Coders dance with the admins’ gals.

I’d like to say a word for the admin,
She come out west when repos were in beta,
She came out west and built a lot of services,
And uploads PDFs with metadata!

The admin is a good and thrifty citizen,
no matter what the coder says or thinks.
You sometimes see ’em drinkin’ in the tea room.
And always wants download stats when she rings.

But the admin and the coder should be friends.
Oh, the admin and the coder should be friends.
The coder writes a script with ease, the admin holds the OA keys,
But that’s no reason why they can’t be friends.

Repository folks should stick together,
Repository folks should all be pals.
Admins dance with the coders’ daughters,
Coders dance with the admins’ gals.

I’d like to say a word for the coder,
the road he treads is difficult and stoney.
He codes for days on end with just a keyboard for a friend.
I sure do find he’s often tired and moany!

The coder should be sociable with the admin.
If he drops in looking like he needs bath water,
Don’t treat him like a louse make him welcome in your house.
But be sure that you lock up your wife and daughters!

I’d like to teach you all a little saying.
And learn the words by heart the way you should.
I don’t say I’m no better than anybody else,
But I’ll be damned if I ain’t just as good!

Repository folks should stick together,
Repository folks should all be pals.
Admins dance with the coders’ daughters,
Coders dance with the admins’ gals.

Suggestions for better lyrics are most welcome!

[If you want to see the original lyrics, ‘The Farmer and the Cowman’ is easy to find online.]