When considering Thomas Baker’s piece, and linked open data in general, I think the most important aspect is interoperability. At its core, linked open data is trying to achieve this interoperability (between file types, descriptions, and datasets), and the results can be incredibly useful across a number of disciplines. In the conclusion of his article, Baker says:
The translation of library standards into RDF involves the separation of languages of description from the specific data formats into which they have for so long been embedded. When defined with “minimal ontological commitment,” languages of description lend themselves to the sort of creative adaptation that is inevitably a part of any human linguistic activity.
I’m curious about the practicality of translating library standards into RDF, and about the sort of training librarians would need to follow linked open data best practices and guidelines. I was intrigued by Baker’s discussion of DCMI and the Library of Congress working together to incorporate alignments into their published vocabularies. Making such declarations on a large scale could reduce much of the expertise otherwise required to produce metadata of the quality needed to make datasets useful on the semantic web.
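To make the idea of a vocabulary alignment concrete, here is a minimal sketch (my own illustration, not from Baker's article) of how a published declaration like "this Library of Congress relator term is a subproperty of a DCMI term" lets software connect records without the cataloger knowing both vocabularies. The specific URIs and the `infer` helper are my assumptions for the example:

```python
# Illustrative sketch: vocabulary alignment expressed as RDF-style triples
# (subject, predicate, object), with a tiny rdfs:subPropertyOf inference step.
# The URIs below reflect real published vocabularies, but this is a toy model,
# not an actual RDF toolkit.
DCT = "http://purl.org/dc/terms/"
REL = "http://id.loc.gov/vocabulary/relators/"
RDFS = "http://www.w3.org/2000/01/rdf-schema#"

# The alignment declaration: a LoC relator term ("author") specialized
# under a broader DCMI term.
alignments = {
    (REL + "aut", RDFS + "subPropertyOf", DCT + "creator"),
}

def infer(graph):
    """One round of rdfs:subPropertyOf inference: if p is declared a
    subproperty of q, then every statement (s, p, o) also implies (s, q, o)."""
    inferred = set(graph)
    subprops = {(s, o) for s, p, o in graph if p == RDFS + "subPropertyOf"}
    for s, p, o in graph:
        for sub, sup in subprops:
            if p == sub:
                inferred.add((s, sup, o))
    return inferred

# A record described only with the LoC relator term...
record = {("http://example.org/book1", REL + "aut", "Thomas Baker")}

# ...becomes discoverable through the DCMI term after inference.
result = infer(alignments | record)
assert ("http://example.org/book1", DCT + "creator", "Thomas Baker") in result
```

The point of the sketch is that once the alignment is declared in the published vocabulary itself, individual catalogers and systems inherit the mapping for free, which is why large-scale declarations could lower the expertise barrier.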