Update on technical development of our DSpace submission tool

Just uploaded a new technical report by Ian Wellaway to our repository: http://hdl.handle.net/10036/3847

This report reviews the approaches that were identified earlier this year as possible solutions to the ‘big data’ upload issue: using the default DSpace upload tool; using third-party software and tools; and developing a bespoke solution for Exeter.

Ian outlines the development work that has been done in these areas and the outcomes. For some time we have been developing two prototypes concurrently – one that could, ideally, be easily reused by other HEIs, and a more bespoke tool catering for Exeter’s specific needs but with less cross-institutional transferability.

Various tools and applications are evaluated and discussed: SWORD, sworduploader, EasyDeposit and Globus FTP.

Hope this will be of interest to other MRD projects and beyond.

Posted under Big Data, Reports, Technical development

This post was written by Jill Evans on October 2, 2012

Technical Update

In the technical part of the project, we have now upgraded our live and test DSpace installations to the latest version (1.8.2) and switched to Oracle 11g (our previous version of Postgres needed to be upgraded, and our support at the University is much better for Oracle). With that done, we’ve been able to move on to developing a submission tool that can cope with some quite heavy research data loads.

The most important scenarios from our perspective are:

  1. How to get large data into DSpace (gigabytes and possibly even terabytes)
  2. How to submit data composed of many different files (without zipping them up first)

Feedback from researchers gathered by our colleagues in the library has shown these issues to be very important and, critically, that the ‘out of the box’ DSpace submission feature does not handle them very well.

To address these, we are looking at developing two different prototype submission tools:

  1. A SWORD-based submission tool written in Python
  2. A submission tool that uses the SWORD service document but then submits via the DSpace command-line import script

By developing the two tools in parallel we hope to determine which works best. So far we have found that while the DSpace command-line tool can submit large files (we successfully submitted a 6GB piece of data), the SWORD tool has hit some issues.
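
To give a flavour of what the SWORD route involves, here is a minimal sketch of a deposit from Python using the requests library. It is an illustration only, not our prototype: the deposit URL, credentials and packaging URI are placeholders, and the packaging a collection actually accepts is listed in its SWORD service document.

    # Hypothetical sketch of a SWORD deposit to DSpace using the requests
    # library. The URL, credentials and packaging URI are placeholders.
    import requests

    SWORD_DEPOSIT_URL = "https://repository.example.ac.uk/sword/deposit/10036/1234"
    USERNAME = "depositor@exeter.ac.uk"  # placeholder
    PASSWORD = "secret"                  # placeholder

    def deposit_package(zip_path):
        """POST a packaged zip file to a collection's SWORD deposit URL."""
        headers = {
            "Content-Type": "application/zip",
            "Content-Disposition": "filename=" + zip_path.split("/")[-1],
            # SWORD 1.3-style packaging header; the service document says
            # which packaging formats the collection really accepts.
            "X-Packaging": "http://purl.org/net/sword-types/METSDSpaceSIP",
        }
        # Passing the open file object streams the upload rather than
        # reading the whole package into memory.
        with open(zip_path, "rb") as package:
            response = requests.post(SWORD_DEPOSIT_URL, data=package,
                                     headers=headers,
                                     auth=(USERNAME, PASSWORD))
        response.raise_for_status()  # a successful deposit returns 201 Created
        return response.text         # Atom entry describing the new item

    print(deposit_package("mydata.zip"))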

However, we hope eventually to have at least one fully working solution, if not two, that can be used to submit data of any shape or size.
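
For comparison, the second route sidesteps the HTTP upload: leaving aside the SWORD service-document step used to pick the collection, you build a DSpace Simple Archive Format package (one directory per item containing a dublin_core.xml file, a ‘contents’ file listing the bitstreams, and the bitstreams themselves) and hand it to the command-line importer. The sketch below only illustrates that shape – the install path, collection handle and eperson address are placeholders, not our configuration.

    # Illustrative sketch of the command-line import route: build a Simple
    # Archive Format package for one item, then call 'dspace import'.
    # Paths, the collection handle and the eperson address are placeholders.
    import os
    import shutil
    import subprocess
    from xml.sax.saxutils import escape

    DSPACE_BIN = "/dspace/bin/dspace"   # assumed install location
    COLLECTION = "10036/1234"           # assumed collection handle
    EPERSON = "depositor@exeter.ac.uk"  # placeholder submitter

    def build_item(package_dir, title, data_files):
        """Create item_001/ with minimal Dublin Core metadata and a contents file."""
        item_dir = os.path.join(package_dir, "item_001")
        os.makedirs(item_dir)
        with open(os.path.join(item_dir, "dublin_core.xml"), "w") as dc:
            dc.write("<dublin_core>\n")
            dc.write('  <dcvalue element="title" qualifier="none">'
                     + escape(title) + "</dcvalue>\n")
            dc.write("</dublin_core>\n")
        with open(os.path.join(item_dir, "contents"), "w") as contents:
            for path in data_files:
                shutil.copy(path, item_dir)
                contents.write(os.path.basename(path) + "\n")

    def import_package(package_dir, mapfile):
        """Run the DSpace batch importer over the package directory."""
        subprocess.check_call([DSPACE_BIN, "import", "--add",
                               "--eperson=" + EPERSON,
                               "--collection=" + COLLECTION,
                               "--source=" + package_dir,
                               "--mapfile=" + mapfile])

    build_item("package", "Test dataset", ["bigdata.dat"])
    import_package("package", "mapfile.txt")

Because the importer reads the files straight from disk on the repository server, it avoids the HTTP upload step altogether, which helps explain why this route has coped better with the 6GB test file.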

Posted under Big Data, Technical development

This post was written by Ian Wellaway on July 31, 2012

Zen Archiving: an Open Exeter Case Study in Astrophysics

Posting this on behalf of Tom Haworth. Tom is a second-year postgraduate in Astrophysics and has been commissioned by us to write a case study documenting the process of transferring large amounts of data (terabytes) from an HPC system (zen) to the Exeter Data Archive.

We are interested in the whole process – from deciding what to keep and what to delete to data bundling and metadata entry. The Astrophysics Group is using the process to develop policy and guidelines on use of zen to store and manage data.

The following are some initial thoughts on how to kick off the process:

Summary:

– The archiving process will have to take place from the command line (or a GUI) on zen-viz.
– Tom Haworth will develop a script that takes user-entered metadata, potentially compresses the file, and sends both directly to the archiving server.
– The Open Exeter IT team has sufficient information to perform the archiving server-end work. They are also considering command line retrieval of data.
– The kind of data that we expect to archive is completed models. Necessary software to view the data should be included too.
– Email and WIKI entries are all that will be required for training.

Where is the data
Data will be stored on zen in one of /archive/, /scratch/ or /data/. The archive and scratch areas are not under warranty.

What kind of data needs to be archived
There will be a range of data of different file formats, some not seen outside of the astrophysics community. These can be collected and compressed, if not by the user then potentially by the submission script at run-time. Compression is not always worth doing so a list of compression-worthy extensions could be stored.
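
As a purely illustrative example, that list could be as simple as a set of extensions consulted before archiving – the extensions below are made up for the sake of the sketch, not an agreed list:

    # Illustrative only: decide whether a file is worth compressing before
    # archiving, based on a stored list of compression-worthy extensions.
    import os

    COMPRESSION_WORTHY = {".dat", ".txt", ".csv", ".fits"}  # example list only

    def worth_compressing(filename):
        """Return True if the file extension is on the compression-worthy list."""
        return os.path.splitext(filename)[1].lower() in COMPRESSION_WORTHY

    print(worth_compressing("model_output.dat"))  # True
    print(worth_compressing("snapshot.gz"))       # False - already compressed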

Data will probably be archived on a model-by-model basis rather than by publication, but publication details will be included in the metadata. This will probably be governed by the size of the files.

Data to be archived should be completed models.

What will happen to the data on zen
This will probably be determined on a case-by-case basis depending on how frequently (if at all) the data is required. Data that has no imminent further use should be removed.

For example, I would be archiving some finished models but may also need them for my thesis.

How might extraction from the archive work from the command line?
– searching could still take place on the web
– extraction would rely on direct communication with the archiving server
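
As a rough sketch, and assuming the bitstream URL is already known from a web search, command-line retrieval might be little more than a streamed download straight to disk on zen – the URL below is a placeholder:

    # Hypothetical sketch: stream an archived bitstream straight to disk.
    # The bitstream URL is a placeholder, not a real item.
    import requests

    def retrieve(bitstream_url, destination):
        """Download a bitstream from the archive in 1MB chunks."""
        with requests.get(bitstream_url, stream=True) as response:
            response.raise_for_status()
            with open(destination, "wb") as out:
                for chunk in response.iter_content(chunk_size=1024 * 1024):
                    out.write(chunk)

    retrieve("https://repository.example.ac.uk/bitstream/10036/9999/1/model.tar.gz",
             "model.tar.gz")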

Policy for archiving
We should avoid letting any user on zen archive absolutely anything and everything. We need:
– guidelines on what should be archived;
– to track how much people have been archiving, so that we can contact them if it looks like they are abusing the system.

Metadata verification for senior users is not required. PhD students could have their submission metadata verified by their supervisor.

Metadata
Metadata is required to ensure that the data is properly referenced and can be found easily.
Entries are Title, Author, Publisher, Date Issued, URL, Abstract, Keywords, Type etc.

In HPC astrophysics there will likely be additional entries of use such as the code used to generate the data. I suggest using an “Additional Comments” field.

This information will be requested at the command line when archiving.
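
A hedged sketch of how that prompt might work, with some of the fixed entries (author, publisher and so on) pre-filled from a per-user config file – the file name and field list here are assumptions for illustration, not a settled design:

    # Illustrative sketch: prompt for metadata at the command line, offering
    # defaults from a per-user config file (~/.zen_archive.cfg is an assumed
    # name, not an agreed convention).
    import configparser
    import os

    FIELDS = ["Title", "Author", "Publisher", "Date Issued", "URL",
              "Abstract", "Keywords", "Type", "Additional Comments"]

    def load_defaults(path=os.path.expanduser("~/.zen_archive.cfg")):
        """Read per-user defaults (e.g. Author, Publisher) if the file exists."""
        config = configparser.ConfigParser()
        config.read(path)
        return dict(config["defaults"]) if "defaults" in config else {}

    def collect_metadata():
        """Ask for each field, offering the stored default where there is one."""
        defaults = load_defaults()
        metadata = {}
        for field in FIELDS:
            default = defaults.get(field.lower(), "")
            entered = input("%s [%s]: " % (field, default)).strip()
            metadata[field] = entered or default
        return metadata

    print(collect_metadata())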

The archiving procedure on zen
It will be completely impractical to archive the data through the web interface. It will also be impractical to download the data onto a local machine and then archive it (local machines probably will not even have the capacity to store the data). The ideal situation will be one in which data can be archived straight from zen, communicating directly with the storage server and sending the appropriate metadata in addition.

This should happen from the zen visualization node, so as not to grind the login node to a halt.

A simple command line script would be all that is required.

Basic archive script
    Read in the name of the thing to archive
    Check the size of the thing to archive
    Communicate with the archiving server to check whether the quota would be exceeded
    If quota not exceeded:
        Get metadata from the user (some could be stored in a .config file for each user)
        Check whether the file extension is in the list of those worth compressing
        Compress if worthwhile
        Copy the metadata and the data to archive across to the archiving server
    Else:
        Tell the user to contact the person responsible for updating quota sizes
    End
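
To make the outline above a little more concrete, here is a rough Python sketch of how it might hang together. The quota check and the transfer to the archiving server are exactly the parts still to be designed, so they appear as placeholder functions; nothing here is the finished script.

    # Rough sketch of the basic archive script. The quota check and the
    # transfer to the archiving server are placeholders for the parts that
    # still need to be designed with the Open Exeter IT team.
    import gzip
    import os
    import shutil
    import sys

    COMPRESSION_WORTHY = {".dat", ".txt", ".csv", ".fits"}  # example list only

    def quota_would_be_exceeded(size_bytes):
        """Placeholder: ask the archiving server whether this upload fits the quota."""
        return False

    def send_to_archive(path, metadata):
        """Placeholder: copy the data and its metadata across to the archiving server."""
        print("Would send %s with metadata %s" % (path, metadata))

    def get_metadata():
        """Minimal stand-in for the command-line metadata prompts."""
        return {"Title": input("Title: "), "Author": input("Author: ")}

    def archive(path):
        size = os.path.getsize(path)
        if quota_would_be_exceeded(size):
            sys.exit("Quota would be exceeded - please contact the quota administrator.")
        metadata = get_metadata()
        # Compress only if the extension is on the compression-worthy list.
        if os.path.splitext(path)[1].lower() in COMPRESSION_WORTHY:
            with open(path, "rb") as src, gzip.open(path + ".gz", "wb") as dst:
                shutil.copyfileobj(src, dst)
            path = path + ".gz"
        send_to_archive(path, metadata)

    if __name__ == "__main__":
        archive(sys.argv[1])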

A GUI version could also be implemented if desired, but would definitely not be necessary for zen.

At present Tom Haworth is going to develop this script and test the procedure on existing data. Pete Leggett of Open Exeter will develop the server end of the process.

Training

For zen users, essentially no training will be required. An email to the zen mailing list telling them what they need to do is standard procedure, and they can also contact the zen manager if they have trouble. We can also add a section to the zen component of the astrophysics WIKI so that there is some permanent documentation.

Posted under Big Data, Case studies

This post was written by Jill Evans on May 31, 2012

PGR feedback on data upload

Last week we asked our group of PGRs to test upload of data to the Exeter Data Archive. I was particularly interested in seeing how they would respond to the interface and the metadata web form.

The following are some of the comments that we received – some of these relate specifically to how DSpace works but some are of general interest:

• Add a sentence to the current licence making it clear that depositors can ask to remove their data/outputs.

• It’s important to be able to see inside a zip file.

• How can multiple files be uploaded?

• It would be used more if it were possible to upload from your own drive – drag and drop rather than entering metadata through the web interface.

• A ‘wizard’ like process would be really helpful.

• Would like a template structure for storing previously entered metadata; this could be selected later for further related deposits.

• Keywords – intuitive prompt text needs to appear in the boxes, otherwise we will get an inconsistent and inaccurate list of keywords.

• Upload speed – varied between PGRs; Mac users found it much quicker – a 100MB audio file uploaded in about 30 seconds, and a 700MB file took 20 minutes to upload on a Mac.

• The Submit button needs to be much clearer

• Do you need to login before you upload or could you choose to upload and then have to login – which is better?

• Metadata – people will cut corners if it’s too onerous.

• Would be good to be able to add projects to the hierarchy (i.e., DSpace Communities structure)

• DPA – does it contravene the Data Protection Act if even an administrator can see sensitive data?

• Data could be encrypted as well as being stored in a ‘dark archive’.

• An upload manager would be a really useful feature – you could queue files for upload and then just leave them.

• Important to add contact details of depositor (PI, etc.), especially email address.

• Clearer help and guidance; make mandatory fields clearer. Title – more specific guidance needed: is this the title of the deposit or of the depositor?

• Would be useful to have a dropdown list of your previous submissions, you could then choose to link things together (e.g., paper & data), and make the process easier.

• Confused about the difference between date of publication and date of creation – publication is the date it becomes publicly available and is needed by DataCite – but DSpace doesn’t automatically assign this detail to the ‘publication’ field.

• Need a more comprehensive list of data types than the default Dublin Core list.

Posted under Big Data, Metadata, Technical development

This post was written by Jill Evans on May 31, 2012

Case study – The Cricket-Tracking Project

Other JISC MRD projects or those working with ‘big data’ may be interested in a case study that has been written for Open Exeter by Dr Jacq Christmas (http://hdl.handle.net/10036/3556).

The case study documents the process of reviewing, preparing, uploading and describing multiple large video files. The project that generated the files is investigating the behaviour of crickets through analysis of thousands of hours of motion-triggered video.

The project is interesting to us for a number of reasons:

• It is a cross-disciplinary/cross-departmental project – these sorts of projects are becoming increasingly common at Exeter and do throw up interesting questions around the area of ‘ownership’
• Huge amounts of data have been and continue to be produced
• Storage is a problem due to the number and size of files – most files are stored on external hard drives held in various places
• As there is no central storage system, secure backup can be a problem
• Ditto secure sharing
• The first batch of video is in a proprietary format that requires specific software in order to be viewable

The case study sets out quite clearly the thought that should be given to selecting and preparing files for upload to a repository. We are looking at how the procedures described can be adapted as templates to guide researchers from other disciplines through the deposit process, some aspects of which will always be generic, for example:

• Listing and explaining the various file formats and how they are related
• Selecting a set of metadata fields to describe the files
• Thinking about the structure of the data in the repository and how it links to related resources, projects and collections

One issue that has arisen from this case study, that we were already well aware of, is the preference to deposit research in a project or research group collection rather than a generic departmental or College collection. In many cases the sense of belonging to or affinity with a group is stronger than departmental ties. This is a tricky one for us: DSpace structure centres on a hierarchy of communities, sub-communities and collections; once these have been set up and start to be populated, it is difficult to make significant changes. Add to that the fact that our CRIS, Symplectic, has been painstakingly mapped across to all our existing communities and collections and any structural changes become even more problematic. For the moment we are looking at a possible metadata solution (dc****.research group ??). I’d be interested to hear how others deal with the research project/group requirement.

We’re about to start a similar test case study with Astrophysics and later in the year with an AHRC-funded project based in Classics and Ancient History. It will be interesting to see whether the approaches taken in these areas are significantly different, or given different emphasis.

I won’t say that our first case study has allowed us to resolve the many issues raised just yet, but we are at least more aware of what is important to researchers and can start to take steps to find solutions.

Posted under Big Data, Case studies

This post was written by Jill Evans on May 28, 2012
