Monthly Archives: June 2008

CallManager WebDialer form

Yesterday afternoon I was slightly bored, and was looking for a small experiment to do in order to take my mind off the reports I was in the middle of writing. The experiment ended up taking the form of messing around with the new Cisco 7941 IP phone that landed on my desk a few weeks ago.

I’d previously played a little with the WebDialer feature that comes with the Cisco CallManager software, which allows you to control the phone via a web interface. WebDialer allows you to either look up a user in the directory, or enter a number manually, and then your phone will automagically dial the number you requested. Cool stuff 🙂

Now I like messing with web forms, and decided to work out how it works in order to rewrite it in a more useful way that suits me. The following is the form that I came up with, both for my own records and in case it proves useful for anybody else:

<form action="https://*/webdialer/Webdialer" method="post" target="_new">
    <input type="hidden" name="cmd" value="doMakeCall" />
    <input type="hidden" name="sub" value="false" />
    <input type="hidden" name="red" value="null" />
    <input type="hidden" name="destination" value="**" />
    <input type="image" src="***.jpg" width="150" />
</form>

*: The URL of your call manager installation

**: The number you wish to dial

***: In this case I used an image to submit the form – I have a page of photos of people I want to ring, and click on their name to ring them.
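Since the form is nothing more than a plain HTTP POST to the WebDialer servlet, the same call can in principle be triggered from a script rather than a browser. The following is only a rough sketch in Python (using the third-party ‘requests’ library); the hostname and destination number are placeholders, and it assumes you already have whatever authenticated session WebDialer expects, so treat it as an illustration rather than a drop-in tool:

# A minimal sketch of the same POST the form above performs.
# The CallManager URL and number are placeholders, and in practice
# WebDialer will expect you to be authenticated, so this is purely
# illustrative.
import requests

CALLMANAGER = "https://callmanager.example.org"   # hypothetical CallManager URL

payload = {
    "cmd": "doMakeCall",
    "sub": "false",
    "red": "null",
    "destination": "01970123456",                 # hypothetical number to dial
}

response = requests.post(CALLMANAGER + "/webdialer/Webdialer", data=payload, timeout=10)
print(response.status_code)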

Preserving reactions to Lord Of The Rings

‘Preserving reactions to Lord Of The Rings’ is a funny blog posting title, but I’ll explain…

Back in 2003 to 2004, our department of Theatre, Film and Television Studies undertook the biggest audience response survey to a film ever. They collected just short of 25,000 responses to the films from speakers of 14 different languages. The project is now finished, published, and they’re hoping to move on to even bigger projects of the same type. So the work is ready to archive in our repository, and it’s my job to archive the data in such a way as to enable and ensure preservation.

Now, I’m no preservation expert, so the following details what I did to archive the data, which was given to us in the form of a Microsoft Access database and a Word document explaining the structure of the database and the codings it used:

  • The database: Well, nothing wrong as such with archiving an Access database – it can easily be used by people today. So that gets archived. But what about a long-term copy for archival and preservation purposes? Access has a nice handy ‘Export to XML’ feature. That looks good! It even gives the option to ensure the file is correctly encoded in UTF-8 to preserve the audience responses in different character sets. (As an aside, the XML file is about 40MB, so in order to get an XML editor to open it to validate it and check the encoding, I had to upgrade the RAM on my Vista workstation from 2GB to 4GB! A more lightweight way of checking this is sketched after this list.)
  • The guidance notes: These came in Microsoft Word format, nice and easy, so that gets archived. A PDF/A copy is then created using Microsoft Word’s ‘Export to PDF’ option, and that is archived too.
  • The repository: All this is stored in a DSpace-powered repository, which runs daily file checksum checks to detect bit-rot, is backed up nightly to disk and tape, and has off-site copies of the tapes stored.
Now to a non-preservation expert, this all sounds too easy. Have I been naive and missed anything out? (Wouldn’t surprise me! 🙂 )
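In case it is useful to anyone else facing the same 40MB export, here is a rough sketch of how the encoding and well-formedness can be checked without an XML editor (and without 4GB of RAM). The filename is a placeholder, and this is just one way of doing it:

# A minimal sketch: confirm a large Access XML export decodes as UTF-8 and
# is well-formed, without holding the whole file in memory at once.
import xml.etree.ElementTree as ET

EXPORT = "lotr_audience_responses.xml"   # hypothetical filename

# 1. Read the file incrementally in text mode; a UnicodeDecodeError here
#    means the file is not valid UTF-8.
with open(EXPORT, "r", encoding="utf-8") as fh:
    while fh.read(1 << 20):
        pass

# 2. Streaming parse; an ET.ParseError here means the XML is not well-formed.
for _event, element in ET.iterparse(EXPORT):
    element.clear()   # discard parsed elements to keep memory use low

print("Export decodes as UTF-8 and is well-formed XML")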


RSP Summer School 2008 – Repository publicity session

I’m travelling back from the RSP Summer School 2008, which has just finished. It was held up on the Wirral at Thornton Manor. The manor house, formerly owned by Lord Leverhulme, was a fantastic venue, and the food was top notch! Whilst there I gave an impromptu SWORD masterclass, and chaired the session on ‘Advocacy’.

The first of two presentations was given by Mary Robinson from SHERPA, who talked about her advocacy work for The Depot.

The second of the presentations was given by Niamh Brennan from Trinity College Dublin. This talk just blew everyone away – it was inspirational, practical, based on real experiences, and gave ideas that all the repository managers could take away with them and implement quite easily.

These are some of the insightful snippets I took away with me:

  • TCD populate their repository from their CRIS. Personally I believe this is the way to go, and enjoyed the chance to talk to Niamh about the technical side of the CRIS. I’ve yet to find an open source CRIS, and was wondering if this is because so much local integration would have to take place to stitch a repository into local systems that, once you have done that, the core CRIS might as well be developed from scratch. Niamh says that this is not the case, and that a core CRIS is big enough to be worthwhile sharing (and hopes to share theirs! 🙂 )
  • Get in contact with, and stay in close contact with, your local research support office. They deal with the grant process, and are a good way of making contact with academics. A good time to explain the impact of the funder mandates to researchers is when they are signing the T&Cs for their latest grant.
  • When it comes to publicity events, always celebrate success! Rather than having a formal launch of your repository, why not have a celebration of the items you already have in there, and the impact they have had.
  • Make people feel good – they like that! Possible examples include ringing up academics who have been in the news to congratulate them, and offering to archive their work. Or make some of your depositors “Featured Authors”. People like to be ‘featured’!
  • Always try to get drinks at events, and once you have drinks, get some food. Once you have food, get some wine. Once you have wine, get some champagne. Once you have champagne, get a photographer. There’s nothing some people like more than having a glass of champagne and having their photo taken at an event. This can be a powerful way of attracting senior managers.
There was loads more, but once the slides are uploaded onto the RSP web site I’ll put a link up. I hope I have represented the points made somewhat accurately, and apologise if I haven’t!


JISC repository aggregator site

It has been announced that JISC have commissioned the creation of a new repository aggregator site:

JISC Repository Aggregator Website 

JISC funds a wide variety of development projects on behalf of its funding bodies. These projects include consultancies and supporting studies where the main deliverable is a report, and projects where the deliverables include products and services as well as these reports.

The project involves developing a small user community to guide the development of the site, to produce the site and to develop a series of bespoke widgets to draw information from readily available sources of information. 

The overall aim of this demonstrator site will be to enable a user to search for, organise and hand-submit a range of relevant information about repositories. The repository aggregator will provide a single destination where people interested in repositories can get information about digital repositories.

Aims and Objectives 

The objectives of the aggregator website are to: 

  • Produce a demonstrator website that can be shown to some members of the repository community to gauge whether they would find such a service useful. Then, make the service available as a public beta offering while plans are made to develop the site further. 
  • Create a customisable and personalisable solution that can adapt to the wide range of information that a user might like to aggregate.
  • Specifically ensure the service can aggregate RSS feeds from relevant blogs, the Intute Repository Search service, information from the RSP site including support contacts, statistics from OpenDOAR and ROAR, SHERPA RoMEO and JULIET, brief explanations of key topics, persistent aggregated searches of sources like Google Scholar and Technorati, subject-based collection details from IESR, descriptions of useful repository software (e.g. IRStats, Feedforward, the SWORD client, Manakin), and RSS feeds from relevant repositories.
  • Create focus groups in a structured way to help manage the feedback from the user community at all stages of development.
  • Specific development requirements include consideration of the Netvibes Universal Widget API, Netvibes Universe, an authentication system, and cross-browser compatibility.
  • (No, not ‘widgets as found in cans of beer‘, but widgets as in ‘web widgets‘!)

It is an interesting development, and with my repository stats hat on (http://maps.repository66.org/) I’m particularly looking forward to seeing what this aggregation can offer, and the value it will provide.
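Just to make the idea concrete for myself (this is purely my own toy sketch, not part of the JISC project), the simplest form of the aggregation described above is pulling recent items from a handful of repository-related RSS feeds into a single list. The feed URLs below are placeholders:

# A toy sketch of RSS aggregation using the third-party 'feedparser' library.
# The feed URLs are hypothetical.
import feedparser

FEEDS = [
    "http://example.org/repository-blog/feed",        # a repository blog
    "http://repository.example.ac.uk/feed/rss_2.0",   # a repository's own feed
]

items = []
for url in FEEDS:
    parsed = feedparser.parse(url)
    source = parsed.feed.get("title", url)
    for entry in parsed.entries:
        items.append((entry.get("updated", ""), source, entry.get("title", "")))

# Newest first (a crude string sort on the date, but fine for a demonstration)
for updated, source, title in sorted(items, reverse=True)[:20]:
    print(updated, "-", source, "-", title)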

Pro mashups book with a CC license

I’ve just followed a link to a blog from someone’s email footer, and found a book published this year: Pro Web 2.0 Mashups: Remixing Data and Web Services (Apress, 2008). The blog is by the author of the book, Raymond Yee.

I was attracted to the blog and the book for two reasons:

1. I love mashups, and when I find the time I like to tinker with my mashup – The Repository Mashup Map. I’m always looking for more ammunition to stuff into my mashup toolbox.
2. My work requires me to work extensively with Open Access Repositories. When I work with academics to examine what could be deposited in a repository we usually end up talking about books, and what can be done with them. Often, and for good and obvious reasons, they do not want to archive whole copies of books. However, I try to encourage them to look at options such as archiving the metadata along with a copy of the cover of the book, and maybe a sample chapter or two. The metadata can, and of course should, contain a link to the publisher’s site and to somewhere the book can be purchased. All of this can serve as a good advert for the book and consequently improve its sales. Raymond has gone to the extreme with this book, and both he and the publisher are to be commended: he has (with the publisher’s permission) put a copy online, licensed under a ‘Creative Commons Attribution Non-Commercial Share-Alike license‘. Great stuff!

This is a good license to use – it means anyone working on a commercial mashup would have to buy the book, and the book has to be attributed if it has been used. This could be a good move to spread the word about the book.

The book can be downloaded chapter by chapter using the following link (http://blog.mashupguide.net/toc/). The book looks excellent and covers a lot of ground. And best of all, I can dip in and out of it a bit online to see if it suits me, and if so, buy a copy.


A DRY CRIG Day

I’ve just returned from the latest CRIG (Common Repository Interfaces Group) meeting at Bath University. In typical CRIG fashion the event was held in a bar, where the food and drink flowed freely. The CRIG team ran an excellent event and are now quite used to running useful, informative and thought-provoking events.

This particular CRIG event was entitled ‘CRIG DRY Workshop’ – DRY = Don’t Repeat Yourself, and today that referred to metadata.

The day started with five five-minute presentations, one of which was the first public outing for our new project ‘The Deposit Plait‘. This fitted in perfectly with the aim of not repeating ourselves when it comes to depositing items into a repository and typically having to re-enter metadata.

There are three strands to a plait, and three strands that we hope to weave together in the Deposit Plait project:

1. Work out exactly what metadata ideally needs to be provided when depositing a scholarly work into a repository. This can be done by seeing who makes use of repository metadata, and what metadata they need in order to do this effectively.
2. Investigate what, if any, metadata can be extracted from XML documents in formats such as OOXML and ODF.
3. See what metadata can be extracted from online or personal bibliographic systems.

So were strands 1 and 2 to yield useful results, we could investigate the feasibility of writing a web service that takes an uploaded document (or a reference to one), extracts some metadata (maybe title, author, abstract) and then uses these to pull in more metadata from other systems. Might be neat, might be a non-starter. That’s what the project will discover.
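To give a flavour of strand 2 (this is just my own sketch, not project code), an OOXML .docx file is simply a zip archive containing docProps/core.xml, from which a title and author can be pulled with a few lines of Python; ODF files work in much the same way via their meta.xml. The filename below is a placeholder:

# A minimal sketch of extracting basic metadata from an OOXML (.docx) file.
import zipfile
import xml.etree.ElementTree as ET

DOC = "uploaded_paper.docx"   # hypothetical uploaded document

NS = {"dc": "http://purl.org/dc/elements/1.1/"}

# docProps/core.xml holds the Dublin Core-style core properties
with zipfile.ZipFile(DOC) as zf:
    core = ET.fromstring(zf.read("docProps/core.xml"))

title = core.findtext("dc:title", default="", namespaces=NS)
creator = core.findtext("dc:creator", default="", namespaces=NS)
print("Title:  ", title)
print("Creator:", creator)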

Anyhow, the event was useful in a number of ways, and there were a number of nice demonstrations. I particularly liked Richard and Rob’s demonstration of depositing OAI-ORE aggregations into DSpace using SWORD. On top of that, the resource map was encoded in RDFa in the DSpace item page, allowing an RDFa reader to use a standard DSpace metadata jump-off page as an OAI-ORE resource map. The really nice thing about it was that from the DSpace side it only required a new packager class and a corresponding entry in the configuration file – I was thinking it might require more tinkering. I also appreciated the ORE talk from Rob. Whilst it only lasted 5 minutes, it was enough to explain the concepts and gave me the dummy’s guide that I’ve been looking out for for some time.

Thanks CRIG!