5 Things I learned at IASSIST

I just got back from IASSIST 2017 and I have to say... I was very impressed! This year, IASSIST (the International Association for Social Science Information Services & Technology) held its conference in Lawrence, Kansas, from May 23-26, 2017. True to its name, the conference brought people from all around the world.

These are my top 5 favourite takeaways from IASSIST 2017:

  1. A poster presented at the conference covered an interesting project recently published in PLoS One, Research data management in academic institutions: A scoping review. This was essentially a systematic review designed to describe the volume, topics, and methodologies of existing scholarly work on research data management in academia. The authors screened 13,002 titles and ultimately included 301 articles. They made the data (the text, methods, etc.) available on Zenodo: Dataset for: Research data management in academic institutions: a scoping review!
  2. Packrat: a dependency manager for R that aims to solve "dependency hell" -- software depends on other packages to run, those packages change over time with no warning, and those changes can break existing code. Packrat works by creating a project-specific package library rather than relying on R's shared library (which is updated as new releases come out). This means R code can be bundled together with its dependencies. It does not, however, capture the version of R itself, which can still pose problems.
  3. Sam Spencer of the Aristotle Metadata Registry gave a great talk about work done in the open metadata space, with a strong use case: government data hosted on data.gov.au. He shocked the crowd by keeping metadata in CSV format. He asks users for 10 basic fields of metadata in CSV form -- and there it stays! He admitted he was nervous to tell this crowd, but the approach has yielded good things for him, including data linkages without explicitly doing linked data. He spoke specifically about using this for geo-metadata; you can check out how it's worked out on this map.
  4. One of the more interesting talks I went to was about digital preservation of 3D data! The speaker laid out 5 methods of creation: freeform (like CAD), measurement, observation, "mix," and algorithm/scanning or photogrammetry. 3D data is difficult to preserve mainly because of a lack of standards, particularly metadata standards. The speaker presented a case study that used Dublin Core as a basis for metadata for the Awash National Park Baboon Research Project's 3D data.
  5. The Digital Curation Network gave an update on its initial planning grant. The DCN lets universities share staff for data curation, a workload that is often too much for a single data curator. The first grant allowed six universities to test how local curation practices translate into a network practice. The next phase includes implementing the network, during which other institutions can join. The network also released a centralized set of curation steps:
    1. Check data files and read documentation
    2. Understand/try to understand the data
    3. Request missing information or changes
    4. Augment the submission with metadata
    5. Transform file format for reuse and long-term preservation
    6. Evaluate and rate the overall submission using the FAIR principles
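The Packrat workflow from takeaway 2 boils down to a handful of calls. Here's a minimal sketch in R; the project path and package name are placeholders, not anything from the talk:

```r
# Create a private, project-specific package library
packrat::init("~/projects/my-analysis")

# Installs now go into the project's library, not the shared one
install.packages("dplyr")

# Record the exact version of every dependency in packrat/packrat.lock
packrat::snapshot()

# Later, or on another machine, reinstall those exact versions
packrat::restore()
```

The lockfile travels with the project, which is what makes the code shareable -- though, as noted above, the version of R itself is not captured.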
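For takeaway 4, a Dublin Core-based record for a 3D object might look like the sketch below. The element choices and values are illustrative assumptions on my part, not the project's actual schema:

```xml
<!-- Illustrative only: a minimal Dublin Core record for a 3D scan -->
<metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Example 3D surface scan</dc:title>
  <dc:creator>Awash National Park Baboon Research Project</dc:creator>
  <dc:type>3D model</dc:type>
  <dc:format>model/obj</dc:format>
  <dc:description>Photogrammetry-derived 3D mesh</dc:description>
  <dc:date>2017</dc:date>
</metadata>
```

Dublin Core's 15 elements are deliberately generic, which is why it can serve as a basis here even though no 3D-specific metadata standard exists.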
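The appeal of the CSV approach in takeaway 3 is that the metadata stays readable and writable with completely standard tooling. A small sketch in Python -- the 10 field names below are my own illustration, since the actual fields Spencer collects weren't listed:

```python
import csv
import io

# Hypothetical 10-field metadata schema; the real fields used for
# data.gov.au are not specified here, so these are illustrative only.
FIELDS = ["title", "description", "publisher", "contact", "licence",
          "keywords", "spatial", "temporal", "format", "landing_page"]

record = {
    "title": "Example dataset",
    "description": "A demonstration metadata record kept as plain CSV.",
    "publisher": "Example Agency",
    "contact": "data@example.org",
    "licence": "CC-BY-4.0",
    "keywords": "demo;metadata",
    "spatial": "Australia",
    "temporal": "2017",
    "format": "CSV",
    "landing_page": "https://example.org/dataset",
}

# Write the record out and read it back: CSV in, CSV out,
# no registry software required.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerow(record)

buf.seek(0)
rows = list(csv.DictReader(buf))
print(rows[0]["title"])
```

Because every row follows the same 10 columns, records from different datasets can be joined on shared values -- one plausible reading of how linkages emerge "without explicitly doing linked data."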
