Monthly Archives: September 2014

Final Project Proposal – Elizabeth Frank

Name: Elizabeth Frank
My group has chosen this topic: Linked Open Data
My interest in this topic: Linked Open Data is a methodology for sharing and structuring data so that they are useful to everyone on the World Wide Web. If we live in the information society, as has been said, then this is where the information is, and through linked open data everyone can access it. Tim Berners-Lee said in his TED talk:

  • All kinds of conceptual things, they have names now that start with HTTP.
  • If I take one of these HTTP names and I look it up [..] I will get back some data in a standard format which is kind of useful data that somebody might like to know about that thing, about that event.
  • When I get back that information it’s not just got somebody’s height and weight and when they were born, it’s got relationships. And when it has relationships, whenever it expresses a relationship then the other thing that it’s related to is given one of those names that starts with HTTP.
  • The “How things are related” angle is what compelled me to choose this topic.
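    Those three points can be made concrete with a small example. In RDF's Turtle notation (the names and URIs below are made up purely for illustration), a "thing" with an HTTP name carries both plain data and relationships, and each related thing gets an HTTP name of its own:

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# A conceptual thing whose name starts with HTTP...
<http://example.org/people/alice>
    foaf:name  "Alice Example" ;                   # ...plain data about it
    foaf:knows <http://example.org/people/bob> .   # ...and a relationship:
                                                   # the related thing also
                                                   # has an HTTP name
```

    Looking up either of those HTTP names is supposed to return more data in the same standard format, which is how one dataset links into another.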

    Some initial resources

    Eichenlaub, N. (2013). Checking in with Google Books, HathiTrust, and the DPLA. Computers in Libraries, 33(9), 4-9.

    Bauer, F., & Kaltenböck, M. (Eds.). (2012). Linked Open Data: The Essentials – A Quick Start Guide for Decision Makers. Vienna, Austria: edition mono/monochrom.

    Howard, J. (2013). Digital Public Library of America, 7-month-old superaggregator. The Chronicle of Higher Education. Retrieved from

    Mitchell, E. T. (2013). Building blocks of linked open data in libraries. Library Technology Reports, 49(5), 11-25, 2.

    Yi, E. (2012, July). Inside the quest to put the world’s libraries online. The Atlantic.

    Based on my preliminary research, I have chosen to focus on:
    I will write a paper on the Digital Public Library of America (DPLA) as an example of linked open data in action. The DPLA is, as stated in its planning statement, “an open, distributed network of comprehensive online resources [drawn from] libraries, universities, archives and museums . . .” Specifically, I would like to focus on the trend of “memory projects” such as Queens Memory, and on how they use linked open data, if indeed they do. At the Metro conference last January, I saw a presenter from Queens Memory showing artifacts from one man’s collection of memorabilia from the 1939 World’s Fair. If I find that they don’t use linked open data, I will focus on cultural heritage projects more broadly and, ideally, include a section on one case study.

    I have a lifelong interest in history, and I believe this project ties in with one I did in my Information Professions class regarding corporate archivists. How are historical artifacts handled when they are not handled by historians? We all have a need to know where we came from; hence the recent enthusiasm for tracing genealogical roots, which has sparked several television shows.

    I’m uncertain what the unresolved questions regarding my topic will be, although I imagine they will include “how to catalogue the Internet,” as we saw in this week’s reading; how to engage (and control) the contributions of “citizen librarians” (such as the man who contributed his World’s Fair memorabilia to Queens Memory); and, as always, copyright. Can the DPLA remain as altruistic as its mission statement? Where is the money coming from? Will funding affect integrity? Can momentum be maintained?


    Trust the Force – Weekly Response Post #4, Elizabeth Frank

    I found the tone in “Resource Description and Access” by Coyle and Hillmann to be laced with an almost adolescent level of grumpiness as they fussed at some unnamed foe (“the way things are now,” perhaps?) for its “legacy approaches,” “unexamined assumptions,” and methods which “support backwards compatibility rather than forward thinking.” They assert that “current methods are not sufficient.” I presume “current” refers to 2006, when the article was presumably written (if not earlier), as it was published early in 2007. Current cataloging methods, authors Coyle and Hillmann assert, were “not suitable for resources that existed in a state of constant change.”

    Maybe not but, to quote Darth Vader, “I find your lack of faith disturbing.”

    Cataloguing is adaptive. That’s why there are so many forms of it for various institutions. If AACR2 (which seems to have hurt the authors very badly; maybe something happened freshman year) isn’t suitable for the Internet, something else will be developed which is. Relax, it’s only 2006.
    The authors later assert “A complex metadata surrogate describing resources in detail is unneeded when the actual item can be viewed within a few seconds and with little effort on the part of the user.” Which would be true if “viewing” and “access” were the only point of cataloguing.

    Moving on to “If It’s Televised, It Can’t Be the Revolution”: Tennant (who seems to feel about MARC as Coyle and Hillmann do about AACR2) is discouraged by the lack of “shared clarity” he found at a recent meeting of the National Information Standards Organization, which failed to define a “future bibliographic information ecosystem.”

    “The New User Environment: The End of Technical Services,” rather than despairing that current cataloging methods are not sufficient, at least outlines a framework for one that would be. It’s interesting to me that its author quoted from Weinberger’s Everything is Miscellaneous, because Weinberger authored an article on Linked Open Data, which is my group project and which may or may not be the solution to all this fuss.

    Weekly Response Post 4 – Eugene Rutigliano

    Bradford Lee Eden’s concluding recommendations to technical services in the Information Technology and Libraries reading strike me as simplistic. His first two points advise that libraries become more efficient and eliminate their backlogs, good courses of action that have probably occurred to most administrators at some point. His third recommendation is to contract with vendors for cataloging services “as much as possible.” This would free up resources in the technical services department to tackle the issues of relevancy and modernization, but it remains unclear how seeking out new contracts would help libraries negotiate dwindling budgets. The fourth point is that technical services staff should adapt to the changing technologies and information needs of library users. Vigilance in the face of rapid changes to the access paradigm should be a priority, but I think the either/or dynamic Eden sets up between traditionalists and progressives in information science is counterproductive. I find it hard to believe any dedicated librarian simply refuses to see how the rise of global searching could make the collections he stewards more useful. The trouble is that learning and teaching new “interoperable” metadata standards is probably a matter of triage for many non-commercial information institutions. And while post-MARC standards have big potential, they carry their own sets of issues, WorldCat 2.0 included. Shifting towards these standards may involve losing more than just antiquated OPACs.

    Also, what does it say when an advocate of adopting new standards for library interoperability considers “noncommercialization” a conservative ideal?

    Final Project Proposal — Diana Rosenthal

    Name: Diana Rosenthal
    My group’s topic: Linked Open Data
    My interest in this topic: My interest in open data was piqued by a course I took over the summer called Institute on Map Collections. In that class, my professor showed us the interesting work being done by the New York Public Library to encourage the open exchange of information, crowdsourcing, and the release of data in multiple formats to foster dissemination. Though the NYPL Map Warper and sites like OpenStreetMap are more specifically about crowdsourcing, the concept of free access really stood out to me. I am interested in the information science aspect of making raw data available, readable by both people and machines, and useful across multiple platforms.

    Resources that may be useful researching this topic:
    Berners-Lee, T. (2006). Linked data. Retrieved from

    Coyle, K. (2012). Linked data tools: Connecting on the Web. Chicago, IL: ALA TechSource.

    Harlow, C. (2014). What is linked data and why do I care? Proceedings from NYC Archives Unconference. New York, NY.

    Miller, E., & Swick, R. (2003). An overview of W3C Semantic Web activity. Bulletin of the American Society for Information Science and Technology, 29(April/May), 8–11.

    O’Hara, K., & Hall, W. (2011). Semantic Web. In Encyclopedia of Library and Information Sciences, (Third Edition, Published online: 29 August 2011; 4663–4676). New York: Taylor and Francis.

    Wilbanks, J. (2006). Another reason for opening access. BMJ: British Medical Journal, 333(7582), 1306–1308.

    Individual focus for literature review: Though I am interested in the mechanics behind creating the standardized “web of data” that linked open data strives to achieve, I am more intrigued by the practical applications of achieving access to datasets. For this reason, I would like to examine the concepts of the Semantic Web and linked open data through the lens of a specific project: New York City Open Data.

    I’m hoping to determine a few things by looking closely at how New York City approaches linked open data. First, I’d like to evaluate the city’s general compliance with the standards established by the W3C (and laid out by Tim Berners-Lee in the reference listed above). Next, I’m interested in reviewing the types of datasets the city has made available and those that could hopefully be made public in the future (for example, restaurant inspection grades are currently available, but are there datasets for hospitals?). Third, I would like to look at the applications that have resulted from the availability of this information and evaluate their usefulness.

    I think the NYC Open Data project was created in an effort by the city government to be transparent, and I’m interested to see whether the projects that utilize city data demonstrate openness and access. It could also be fascinating to compare the New York City initiative to the linked open data initiatives of other governments, foreign and domestic.

    Two questions I still have after my preliminary research are whether current linked open data projects do indeed follow the W3C’s Resource Description Framework, and whether the RDF plan is easy enough to understand for linked data to take off among dedicated non-experts. Thus far, my research has been broad, to provide myself with the foundation of information necessary for understanding linked open data. I think the NYC Open Data case study will help narrow my focus and expose me to more specific scholarship.

    Hannah’s response to Gilliland

    The readings all referred to the importance of metadata in terms of the extent and history of its use, but their descriptions of the different purposes of metadata made me wonder whether there are identifiable limits to the utility of metadata. The first passage that raised that question for me was Gilliland’s assertion in the “Preservation and persistence” subsection of the section “Why Is Metadata Important?”:

    If digital information objects that are currently being created are to have a chance of surviving migrations through successive generations of computer hardware and software, or removal to entirely new delivery systems, they will need to have metadata that enables them to exist independently of the system that is currently being used to store and retrieve them.

    My immediate reaction, as a consumer of information through various media of which the majority are digital, was to think about digital information objects that are created in a format that can only be interpreted by one company’s software. The consumers of those objects are not supposed to foresee needing to use a different system to store or view them, so even if the object is associated with metadata that would link it to more generally usable versions, it is often in the object creator’s interest to ensure that its unique metadata conventions are the most widely known.

    Rereading the “Expanding use” subsection, I thought of another way that metadata can (paradoxically?) restrict the use of an object. Even if the creator or holder of a digital information object claims an overarching goal of making the object available to new audiences, there is a significant chance he does not understand the needs of those audiences well enough to make the greatest possible use of their storage systems, while metadata creators who do understand the audience may not understand the original object well enough to identify it in the metadata for their version.

    Weekly Response #3 – Kyle Olmon

    Having been out of town last week and missing class, I was a bit apprehensive about tackling the readings on metadata along with the two PowerPoint decks. Metadata is a major component of the work of a majority of information science professions, and I was hoping to get a good handle on the elusive practice of collecting “data on data”. Chowdhury laid a solid foundation, introducing the basic concepts in his chapter on metadata. The Gilliland text was helpful as it expanded on the definitions and practices of metadata collection, and, being a visual person, I benefited greatly from the examples provided in the tables. Visualizing the different roles and functions of metadata helped me understand how this information is collected and shared in various LIS and public environments. I began to see the struggle for a format that could be rigid enough to capture and codify information in a manner that best serves the object, but flexible enough to anticipate new practices and, more importantly, to be shared among a variety of users with vastly different expectations of and interest in the information.

    Then I got to the Dublin Core User Guide and became completely lost. The visuals of the RDF graphs, with URIs springing forward like copulating bunnies, were no help in illustrating the concept of Linked Data. The more I pored over the RDF terms, the more I was bogged down in not only the “data of the data” but the data of the data of the data. I am hoping that today’s class will be able to provide some concrete examples of MARC and EAD records that we can dissect and discuss. Bring on the visuals!
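    For what it’s worth, the core RDF idea is smaller than those graphs make it look: every statement is just a (subject, predicate, object) triple, and because subjects and objects are named with URIs, one statement can point at the subject of another. A minimal sketch in Python (all the URIs and values below are made up for illustration):

```python
# A toy triple store: each RDF statement is a (subject, predicate, object)
# tuple. URIs used as objects let one statement link to another's subject.
triples = [
    ("http://example.org/book/moby-dick", "dc:title",   "Moby-Dick"),
    ("http://example.org/book/moby-dick", "dc:creator", "http://example.org/person/melville"),
    ("http://example.org/person/melville", "foaf:name", "Herman Melville"),
]

def objects_of(subject, predicate):
    """Return every object asserted for a given subject/predicate pair."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Following a link: the book's creator is itself a resource we can ask
# further questions about -- the "data of the data of the data".
creator = objects_of("http://example.org/book/moby-dick", "dc:creator")[0]
print(objects_of(creator, "foaf:name"))  # ['Herman Melville']
```

    That link-following step is the whole trick; real RDF tooling just does it at web scale, dereferencing the URIs over HTTP.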


    Maureen McElroy Response 3

    So I will be completely honest: I was not quite sure what metadata was.

    I knew it was important. It has been mentioned in other classes and readings, plus there is a whole class on it. I was a little intimidated by her, Metadata, kind of like the girl who makes you feel inferior: she has done nothing to you, but you always feel a little uneasy when her name is mentioned. So I went into the reading thinking: I will finally understand her, or hate her more. I am happy to report that I have a much better understanding of her (turns out she just had “resting bitch face”).

    The first line of Gilliland’s article explains it all, or at least makes it easier for me to wrap my head around it: “data about data.” I liked this article because it really broke things down in a simple way. I do have some questions about museum and archival metadata that I will ask about in class.

    As far as Dublin Core, I think I am going to have to see it in action or have it better explained.
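    To see it in action: a Dublin Core record is really just a handful of simple, repeatable elements. Here is a made-up example of the kind of minimal record you might see, using the standard dc: element names (the book details are illustrative):

```xml
<oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
           xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Moby-Dick; or, The Whale</dc:title>
  <dc:creator>Melville, Herman</dc:creator>
  <dc:date>1851</dc:date>
  <dc:type>Text</dc:type>
  <dc:language>en</dc:language>
</oai_dc:dc>
```

    The whole scheme is only fifteen elements, all optional and all repeatable, which is what makes it so easy to share between institutions (and so much less scary than she first appears).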

    I like the stuff we are learning (MARC, Dublin Core, etc.), probably because I really like things to have a process and an order to them.

    Week 3- Metadata- Katherine K.

    To be honest, and without trying to sound pretentious, I knew metadata was an important factor in LIS. I use it to catalog at my internship, and I’m glad the readings backed up my findings about this. Overall, I think Chowdhury’s chapter on metadata put it best, since he explains it simply rather than going into too much detail. While the other articles were decent, they really made metadata seem more complicated than it is.

    According to Chowdhury’s chapter on metadata (page 142), metadata help with:

    1) Description and cataloguing: metadata facilitate the proper description and cataloguing of information resources, especially electronic and web resources.

    2) Information retrieval: metadata facilitate information retrieval, and several subject gateways use metadata for resource discovery and information retrieval.

    3) Management of information resources: metadata are the main building blocks of information architecture and content management, two newly developed fields within information services that aim to organize information in more effective ways so that it can be retrieved by users easily and intuitively.

    4) Determination of document ownership and authenticity of digital resources: metadata store important information about electronic information resources that can tell users about ownership, provenance, special marks, etc, which can be very useful for resource discovery and management.

    5) Ensuring interoperability and data transfer between systems: metadata formats enable data transfer between systems.
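    Point 5 is the easiest to picture with a toy example: a “crosswalk” maps one system’s local field names onto a shared scheme such as Dublin Core, so records can move between systems. A minimal sketch in Python (the field names are illustrative, not from any real system):

```python
# A crosswalk: local field names mapped onto shared Dublin Core terms.
# Fields with no mapping are simply dropped from the exchanged record.
CROSSWALK = {"Title": "dc:title", "Author": "dc:creator", "PubYear": "dc:date"}

def to_dublin_core(local_record):
    """Translate a local record's fields into Dublin Core terms."""
    return {CROSSWALK[k]: v for k, v in local_record.items() if k in CROSSWALK}

record = {"Title": "Moby-Dick", "Author": "Melville, Herman", "PubYear": "1851"}
print(to_dublin_core(record))
# {'dc:title': 'Moby-Dick', 'dc:creator': 'Melville, Herman', 'dc:date': '1851'}
```

    Because both systems agree on the Dublin Core terms, neither needs to know anything about the other’s internal field names.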

    In conclusion, I’m glad that Chowdhury puts metadata simply, in bullet points, for those who are just learning about the subject. He writes very well without making it complicated, which is how metadata should be.

    Weekly response to reading #3 – Valerie Ramshur

    Setting the Stage, by Anne J. Gilliland.

    The phrase “data about data” is vague, and it has become an umbrella phrase for what is clearly a very in-depth way of organizing, categorizing, and accessing information on “stuff” or “objects”. I was unclear myself on what metadata actually meant before reading this week’s papers. Now I feel overwhelmed by what it can be and what it should be. I need to step back.

    What surprised me about this week’s readings was the variety and range of activities involved in metadata: metadata is more than mere description of “objects”. The collection of all types of metadata and their functions (see Table 2 on page 7) was particularly helpful. Putting a visual guide in place allows me to reference and grasp the distinct categories involved.

    Additionally, the “little-known facts” were also quite useful and a clear way to wrap my head around this topic. I hadn’t thought before about all the various ways metadata would be created and used across so many fields and in so many different institutions. With our increasing use of digital and online accessibility, this data, and the cross-communication metadata makes possible, will prove to be the key to our understanding of “our cultural history” indeed.