Top Tips for Developing Research Group-level RDM Policy

The Open Exeter and Marine Renewable Energy policy case study, published today, suggests some tips for other research groups thinking about designing their own research data management (RDM) policies. The recommendations are as follows:

  • Research group level policy development should be collaborative and include consultation with all members of the research group as far as possible. Feedback from the research community should be listened to; participation in policy development can give researchers a sense of ownership and make the policy implementation phase easier.
  • It can be helpful to separate out the principles of a policy from the nitty-gritty of procedures; thus those who don’t wish to read a longer, more detailed document can understand the main points quickly and refer to the procedural document only when necessary.
  • Institutional, funder, ethical, legal and commercial guidelines should be considered during policy development, and local research data management policies should be updated when those guidelines change.
  • Consider institutional as well as local and discipline-specific solutions. For example, if your institution provides a data repository, would it be better to use this for the long-term storage of data rather than local storage, or should data sets be stored in a discipline-specific repository?
  • Decide on the scope of the policy; different research groups have different priorities – for example, a Psychology-based group would probably be more concerned with ethical and legal issues to do with working with human participants. It may be worth concentrating first on priority areas and rolling out a more comprehensive policy at a later date.
  • Try to balance the amount of detail in the procedural document with respecting researchers’ working habits. For example, is it necessary for all researchers to use the same system to name files?
  • Work out an estimated timetable for policy and procedure development but be flexible to reflect changing circumstances if necessary.
  • Consider the relationship between guidelines for individual projects and research group policy.
  • Tailor RDM policy and procedures to the support available to your research group. For example, a group with a dedicated Computing Development Officer may be able to put into place more bespoke solutions than a group without this support.
  • Listen to researchers’ concerns and make sure they are clearly addressed in the policy and procedural documents.
  • Provide support for the initial transition. Staff may not have time for tasks such as consolidating and transferring old data sets to a central storage system, as they are busy with current and future work and rarely have the time to look backwards.

Have you developed a research-group level RDM policy? Do you agree with these recommendations or have any of your own suggestions? Let us know!

Posted under Case studies, News, Policy, Research

This post was written by Hannah Lloyd-Jones on July 26, 2013


Zen Archiving: an Open Exeter Case Study in Astrophysics

Posting this on behalf of Tom Haworth. Tom is a second-year postgraduate in Astrophysics and has been commissioned by us to write a case study documenting the process of transferring large amounts of data (TBs) from an HPC system (zen) to the Exeter Data Archive.

We are interested in the whole process – from deciding what to keep and what to delete to data bundling and metadata entry. The Astrophysics Group is using the process to develop policy and guidelines on use of zen to store and manage data.

The following are some initial thoughts on how to kick off the process:

Summary:

– The archiving process will have to take place from the command line (or a GUI) on zen-viz.
– Tom Haworth will develop a script that takes user-entered metadata, potentially compresses the file, and sends both directly to the archiving server.
– The Open Exeter IT team has sufficient information to perform the server-end archiving work. They are also considering command-line retrieval of data.
– The kind of data that we expect to archive is completed models. The software necessary to view the data should be included too.
– Email and wiki entries are all that will be required for training.

Where is the data?
Data will be stored on zen in one of /archive/, /scratch/ or /data/. The /archive/ and /scratch/ areas are not under warranty.

What kind of data needs to be archived?
There will be a range of data in different file formats, some not seen outside the astrophysics community. These can be collected and compressed, if not by the user then potentially by the submission script at run-time. Compression is not always worth doing, so a list of compression-worthy extensions could be stored.
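As a minimal sketch of how such a check might work in Python – the extension list and the choice of gzip as the compressor are purely illustrative assumptions, not decisions from the case study:

```python
import gzip
import shutil
from pathlib import Path

# Illustrative list only: the real list of compression-worthy extensions
# would be agreed within the group and stored somewhere central.
COMPRESSIBLE_EXTENSIONS = {".dat", ".txt", ".csv", ".log"}

def compress_if_worthwhile(path: Path) -> Path:
    """Gzip the file if its extension is on the list; otherwise
    return the original path unchanged."""
    if path.suffix.lower() not in COMPRESSIBLE_EXTENSIONS:
        return path
    compressed = path.with_name(path.name + ".gz")
    with open(path, "rb") as src, gzip.open(compressed, "wb") as dst:
        shutil.copyfileobj(src, dst)
    return compressed
```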

Data will probably be archived on a model-by-model basis rather than by publication, but publication details will be included in the metadata. This will largely be governed by the size of the files.

Data to be archived should be completed models.

What will happen to the data on zen?
This will probably be determined on a case-by-case basis depending on how frequently (if at all) the data is required. Data that has no imminent further use should be removed.

For example, I would be archiving some finished models but may also need them for my thesis.

How might extraction from the archive work from the command line?
– searching could still take place on the web
– extraction would rely on direct communication with the archiving server

Policy for archiving
We should avoid letting any user on zen archive absolutely anything and everything. We need:
– guidelines on what should be archived
– a way to track how much people have been archiving, so that we can contact them if it looks like they are abusing the system.

Metadata verification for senior users is not required. PhD students could have their submission metadata verified by their supervisor.

Metadata
Metadata is required to ensure that the data is properly referenced and can be found easily.
Entries include Title, Author, Publisher, Date Issued, URL, Abstract, Keywords, Type, etc.

In HPC astrophysics there will likely be additional entries of use, such as the code used to generate the data. I suggest using an “Additional Comments” field.

This information will be requested at the command line when archiving.
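A minimal sketch of how that command-line request might look in Python, using the field names listed above plus the suggested “Additional Comments” field. The per-user config file – its name, location and key=value format – is an assumption borrowed from the script outline later in this post:

```python
from pathlib import Path

# Fields taken from the post; "additional_comments" is the suggested
# extra field for HPC-specific details such as the generating code.
METADATA_FIELDS = [
    "title", "author", "publisher", "date_issued",
    "url", "abstract", "keywords", "type", "additional_comments",
]

def read_user_defaults(config: Path = Path.home() / ".archive.config") -> dict:
    """Load per-user defaults (e.g. author, publisher) from a simple
    key=value config file; file name and format are assumptions."""
    defaults = {}
    if config.exists():
        for line in config.read_text().splitlines():
            if "=" in line:
                key, value = line.split("=", 1)
                defaults[key.strip()] = value.strip()
    return defaults

def prompt_for_metadata() -> dict:
    """Ask for each field at the command line, offering any stored default."""
    defaults = read_user_defaults()
    record = {}
    for field in METADATA_FIELDS:
        default = defaults.get(field, "")
        answer = input(f"{field} [{default}]: ").strip()
        record[field] = answer or default
    return record
```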

The archiving procedure on zen
It will be completely impractical to archive the data through the web interface. It will also be impractical to download the data onto a local machine and then archive it (local machines probably will not even have the capacity to store the data). The ideal situation will be one in which data can be archived straight from zen, communicating directly with the storage server and sending the appropriate metadata in addition.

This should happen from the zen visualization node, so as not to grind the login node to a halt.

A simple command line script would be all that is required.

Basic archive script
Read in name of thing to archive
Check the size of the thing to archive
Communicate with the archiving server to check if the quota will be exceeded
If quota not exceeded:
    Get metadata from user (some could be stored in a .config file for each user)
    Check if the file extension is in the list of those that are worth compressing
    Compress if worthwhile
    Copy metadata and dataToArchive across to the archiving server
Else:
    Tell the user to contact the person responsible for updating quota sizes
End
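As a hedged sketch only, here is how that outline might translate into Python. The archiving server’s interface is not yet defined, so the hostname, the quota check and the scp transfer below are all placeholder assumptions; compress_if_worthwhile and prompt_for_metadata are the sketches from earlier in this post:

```python
import json
import subprocess
import sys
from pathlib import Path

# Placeholder: the archiving server is run by the Open Exeter IT team
# and its address and interface have not yet been decided.
ARCHIVE_HOST = "archive.example.ac.uk"

def quota_ok(size_bytes: int) -> bool:
    """Stub for the server-side quota check; permits everything until
    the archiving server's real API is available."""
    return True

def archive(target: Path) -> None:
    """Archive a single bundled file (e.g. a tarball of one completed model)."""
    size = target.stat().st_size
    if not quota_ok(size):
        print("Quota would be exceeded; please contact the person "
              "responsible for updating quota sizes.")
        return
    metadata = prompt_for_metadata()          # sketch earlier in this post
    payload = compress_if_worthwhile(target)  # sketch earlier in this post
    meta_file = payload.with_name(payload.name + ".json")
    meta_file.write_text(json.dumps(metadata, indent=2))
    # scp is one plausible way to send both files from zen-viz; the
    # actual transfer will use whatever the archiving server supports.
    subprocess.run(
        ["scp", str(payload), str(meta_file), f"{ARCHIVE_HOST}:incoming/"],
        check=True,
    )

if __name__ == "__main__":
    archive(Path(sys.argv[1]))
```

Once tested against real data, this could be run on zen-viz as something like `python archive.py completed_model.tar`.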

A GUI version could also be implemented if desired, but would definitely not be necessary for zen.

At present Tom Haworth is going to develop this script and test the procedure on existing data. Pete Leggett of Open Exeter will develop the server-side components.

Training

For zen users, essentially no training will be required. An email to the zen mailing list telling them what they need to do is standard procedure. They can also contact the zen manager if they have trouble. We can also add a section to the zen component of the astrophysics wiki so that there is some permanent documentation.

Posted under Big Data, Case studies

This post was written by Jill Evans on May 31, 2012


PGR feedback on data upload

Last week we asked our group of PGRs to test upload of data to the Exeter Data Archive. I was particularly interested in seeing how they would respond to the interface and the metadata web form.

The following are some of the comments that we received – some of these relate specifically to how DSpace works but some are of general interest:

• Add a sentence to the current licence making it clear that depositors can ask to remove their data/outputs.

• It’s important to be able to see inside a zip file.

• How can multiple files be uploaded?

• It would be used more if it were possible to upload from your own drive – drag and drop rather than entering metadata through the web interface.

• A ‘wizard’ like process would be really helpful.

• Would like a template structure for storing previously entered metadata; this could be selected later for further related deposits.

• Keywords – intuitive prompt text needs to appear in the boxes, otherwise the result will be an inconsistent and inaccurate list of keywords.

• Upload speed varied between PGRs; Mac users found it much quicker – a 100 MB audio file uploaded in about 30 seconds, while a 700 MB file took 20 minutes to upload with a Mac.

• The Submit button needs to be much clearer.

• Do you need to login before you upload or could you choose to upload and then have to login – which is better?

• Metadata – people will cut corners if it’s too onerous.

• Would be good to be able to add projects to the hierarchy (i.e., DSpace Communities structure)

• DPA – would it contravene the Data Protection Act if even an administrator can see sensitive data?

• Data could be encrypted as well as being stored in a ‘dark archive’.

• An upload manager would be a really useful feature – you could queue files for upload and then just leave them.

• Important to add contact details of depositor (PI, etc.), especially email address.

• Clearer help and guidance needed; make mandatory fields more obvious. Title – more specific guidance: is this the title of the deposit or the name of the depositor?

• Would be useful to have a dropdown list of your previous submissions; you could then choose to link things together (e.g., paper & data) and make the process easier.

• Confused about the difference between date of publication and date of creation – publication is the date it becomes publicly available and is needed by DataCite – but DSpace doesn’t automatically assign this detail to the ‘publication’ field.

• Need a more comprehensive list of data types than the default Dublin Core list.

Posted under Big Data, Metadata, Technical development

This post was written by Jill Evans on May 31, 2012


Case study – The Cricket-Tracking Project

Other JISC MRD projects or those working with ‘big data’ may be interested in a case study that has been written for Open Exeter by Dr Jacq Christmas (http://hdl.handle.net/10036/3556).

The case study documents the process of reviewing, preparing, uploading and describing multiple large video files. The project that generated the files is investigating the behaviour of crickets through analysis of thousands of hours of motion-triggered video.

The project is interesting to us for a number of reasons:

• It is a cross-disciplinary/cross-departmental project – these sorts of projects are becoming increasingly common at Exeter and throw up interesting questions around ‘ownership’
• Huge amounts of data have been and continue to be produced
• Storage is a problem due to the number and size of files – most files are stored on external hard drives held in various places
• As there is no central storage system, secure backup can be a problem
• Ditto secure sharing
• The first batch of video is in a proprietary format that requires specific software in order to be viewable

The case study sets out quite clearly the thought that should be given to selecting and preparing files for upload to a repository. We are looking at how the procedures described can be adapted as templates to guide researchers from other disciplines through the deposit process, some aspects of which will always be generic, for example:

• Listing and explaining the various file formats and how they are related
• Selecting a set of metadata fields to describe the files
• Thinking about the structure of the data in the repository and how it links to related resources, projects and collections

One issue that has arisen from this case study, and that we were already well aware of, is the preference to deposit research in a project or research group collection rather than a generic departmental or College collection. In many cases the sense of belonging to or affinity with a group is stronger than departmental ties. This is a tricky one for us: DSpace structure centres on a hierarchy of communities, sub-communities and collections; once these have been set up and start to be populated, it is difficult to make significant changes. Add to that the fact that our CRIS, Symplectic, has been painstakingly mapped across to all our existing communities and collections, and any structural changes become even more problematic. For the moment we are looking at a possible metadata solution (dc****.research group ??). I’d be interested to hear how others deal with the research project/group requirement.

We’re about to start a similar test case study with Astrophysics and, later in the year, with an AHRC-funded project based in Classics and Ancient History. It will be interesting to see whether the approaches taken in these areas are significantly different, or given different emphasis.

I won’t say that our first case study has yet allowed us to resolve the many issues raised, but we are at least more aware of what is important to researchers and can start to take steps to find solutions.

Posted under Big Data, Case studies

This post was written by Jill Evans on May 28, 2012


OR2012

Good news for Open Exeter – we heard that our paper on archiving PGR data has been accepted for OR2012 in Edinburgh. We are all planning to attend, so we hope to catch up with other MRD02 projects in July.

Posted under News

This post was written by Jill Evans on April 30, 2012


Archiving PGR research data?

As we finish the third week of our investigations into RDM practice around the University, we’re a little surprised by a common factor that is starting to emerge from interviews: concern about what happens to PGRs’ data when they leave the University at the end of their studies.

We had some idea from conversations with PGRs that they themselves have questions about what happens to student data when someone leaves. The most consistent comment is that since there are no policies or guidelines of any sort, data will probably sit on a hard drive or external drive in an office somewhere until either the device fails or no-one can figure out how to access the files again.

For PGRs this is a problem for two main reasons:
• Students would like to receive recognition for their work and feel it is being valued and reused to contribute to building knowledge in their academic field. If the data is more accessible, it will have greater impact and enhance their career development.
• Typically this research data is unavailable for incoming students to build on; they will be aware that the research has taken place but due to the lack of policy on recording and storing PGR data, they (and their supervisors) have no way of locating it.

For researchers, where PGR research has been incorporated into project/research group activities, continuing access to raw data is critical.

Researchers may be aware that previous research is relevant to the students they currently supervise but, again, cannot access the original data. This can lead to duplication of effort.

Additionally, it can be useful to have access to restriction-free raw data as a tool for teaching research skills and methodologies to incoming students.

Until this point, we hadn’t really considered that there might be a role for the project in providing continuing access to PGR data. However, there is clearly a (relatively) quick win opportunity for us here: we already mandate thesis deposit to our research outputs repository, ERIC, which we are looking at integrating with our data archive; we already allow deposit of supplementary files, such as video and audio when they’re an integral part of the thesis. It’s only a comparatively small next step to then permit (or even mandate?) deposit of underlying data. It’s an aim we will certainly incorporate into our scheme of work over the next few months.

Are other projects coming across a similar situation?

Posted under Follow the Data

This post was written by Jill Evans on March 2, 2012


What is Data? Some responses from PGRs

We asked the question above of every student we interviewed when we were recruiting PGRs for our Follow the Data work strand. These are some of the responses – make of them what you will!

Collected data
Anything you’ve created rather than sources
Facts that are collected and stored for later analysis
Quantitative not qualitative
Could be anything
Information that’s collected
Quantitative and qualitative
Anything: Word docs, interviews, questionnaires, video, emails even
Raw materials that’s gathered – it could be your own or other people’s
It isn’t data that you generate yourself
Depends on the discipline
Optimisation results
Data generated for future analysis by conducting research
Measurements
Online databases
Open to interpretation
Material you use, such as books
Material that’s already there
Raw data that’s produced
Data from books, video clips, photos from archives, mp3s, YouTube
My own performances
Primary texts
Sources: books, journals, articles, ebooks, YouTube, performances
What is created: annotated bibliographies, Word docs, web sites, diagrams, tables, Paint docs
Published data
Historical environment records
Unpublished data
Empirical data
Comparative legal data
Metadata
Bibliographies
Numbers
Thoughts
Statistical analysis
Microscope images for analysis
My own published papers
Lab books
Audio visual
Technical manuals
Archives
Artefacts…

Posted under Follow the Data

This post was written by Jill Evans on January 27, 2012
