Weekly Response: Alana Mohamed

Wright’s description of Paul Otlet’s system paints him as an innovative mind who intuited the power of linked data in a social space. Rayward’s article provides even more context for that system by discussing the implicit biases and problems of the widely used Dewey Decimal System. The first of Otlet’s ideas that impressed me was his notion of creating a social space for information. While Dewey’s system places information in hierarchies, Otlet’s UDC outlines multiple horizontal relationships between pieces of information. I was surprised at how easily I understood the UDC, though I can certainly imagine there would be some conflict in deciding whether something was a “philosophy” or a “science.” I suppose that’s the point of these articles: to show the ways knowledge is constantly in flux. Organizing knowledge seems to mean constantly changing, or at the very least, operating within a classification system that allows for change.
I had mixed feelings while reading Heidorn’s piece. I certainly think that, in this new age of digital data production, there may be no better place than an academic library to store new data. However, I also worry about the financial burden this places on academic libraries. Learning to maintain data could help librarians work through many of the other struggles they now face regarding digital material, but unless the library is funded by nonpartisan money, the data it stores is susceptible to corruption. What I mean is this: libraries are already strained to digitize their collections, find new (and cheap) ways to catalog, create more accessible interfaces for their users, and update both digital and physical collections. Librarianship seems to be at a bunch of crossroads that are both mind-boggling and expensive. With data curation, librarians face a new challenge that is decidedly less familiar. As a novice librarian, I can’t really say how well adapted the average academic librarian is to deal with these new obstacles, but as a writer and reader, it seems clear to me that Heidorn skims over these difficulties. For example, when he talks about grant writing, he isn’t clear about how exactly librarianship can help researchers come up with data management plans. Of course I have a general idea of how this could be done, and when he says, “Libraries have concerned themselves with digital object access and preservation since the beginning of the information revolution, so library staff who understand the underlying concepts are already well positioned to assist,” he is essentially saying the same thing. I suspect that it will be a much more difficult process than Heidorn makes it out to be.
His point is that it will be difficult but necessary. Still, I think it’s important to confront those difficulties, because without doing so, the only incentive to take on the burden of data management is funding. Any researcher knows you have to be critical of your source: who conducts the research, how it is conducted, and who funds it. Even so, large companies are still able to put out intellectually dishonest material, which is often gobbled up quickly by the media and made even more unreliable by scare-tactic headlines and faulty reporting. I worry that libraries will fall into a similar trap, taking funding from the companies or government agencies with the most money and the most interest in pushing a political agenda. If libraries aren’t critical about where the funding for data management comes from, incorrect or dishonest data may be overrepresented. Which is not to say that academic libraries don’t have their own agendas and political leanings to begin with, but it seems to me that libraries are in a particularly vulnerable position right now and extremely susceptible to this kind of corruption.
A few weeks ago, I remember discussing how difficult it must be to get a computer to take input and process it like a human, reading into culturally understood connotations, subtexts, and colloquialisms. When I was reading the Chowdhury chapter and Thomas Mann’s piece, I began to think that subject headings were the solution. It did seem exhausting, thinking through subjects that to us seem so obviously connected. That’s why I really appreciated the Taylor chapter on subject headings and how to determine the ‘aboutness’ of a resource. It brought up many of the issues with determining subject headings that I foresaw when reading the Chowdhury chapter but couldn’t vocalize, and pointed out others I had not yet thought of.
I couldn’t help but think of our radical cataloging session with Jenna and Emily last week. For example, when Taylor began to talk about neutrality, I remembered Jenna and Emily discussing the politics of using the term ‘queer’ in cataloging. Each generation within the LGBTQA umbrella has its own identifiers, and on top of that, preferred descriptors may vary from person to person. Jenna talked about how the word queer seems perfectly neutral for her to use when cataloging, but pointed out that an older gay colleague still finds it offensive.
Also, when Taylor talked about the lack of focus on ‘point of view’ in describing aboutness, they noted that this content characteristic is especially useful “for items that may be political in some fashion (e.g. political works, religious works, cultural treatises, works on sexuality, gender, age, socio-economic levels etc.)” I found this interesting in light of what Emily pointed out about searching for ‘women’ as opposed to ‘white women’ in the catalog. Suppose we didn’t prioritize point of view only for “political” works. Would that encourage us to reexamine what we mean by political? Of course, I am just philosophizing. At the end of the day, cataloging is just a job.
Taylor did a great job of getting at the intricacies of subject headings and ‘aboutness.’ I wonder now if determining subject headings should really be a one-person job. It seems that what’s needed is the expertise of the author, an understanding of their intended audience, and then feedback from a secondary audience to truly determine which subject headings best apply. It seems a daunting task for one person to be that prescient about users’ needs.
I was especially excited to read Furner’s article on Dewey because I have recently been thinking about race and Dewey. I can’t remember if I wrote about this before, but I was looking at Dewey subjects recently and realized that European and American literatures had subdivisions such as “fiction,” “non-fiction,” “drama,” and “poetry”: vast categories that hinted at the variation and nuance within these regional literatures. However, non-Western countries were hardly afforded the same privilege. Categorization was mostly based on location (“Africa” -> “South African literature”), which was surprising to me. I know that there is a lot of bureaucracy surrounding change in classification, but I never knew how slow things were to change!
I was particularly interested in Furner’s discussion of self-identification and mixed-race people, something I had never thought about before. I began to think about user-based tagging and classification. If we allowed mixed-race people to define information on their own terms, what would our classification system look like? I truly have little idea, and I know that there is a broad spectrum of people who consider themselves mixed-race and a broad range of issues they face. I did think it was cool to imagine a new language of categorization based on a new perspective. It makes me wonder about the diversity of catalogers and librarians. Would it make a difference if we sought out marginalized people to navigate and classify their own histories?
I was really grateful for the Tunkelang piece on facets. Last week, I think I ended up overthinking the concept of ontology vs. vocabulary based on its philosophical definition. It was helpful that Tunkelang drew the connection between the philosophical definition of ontology and its usage in LIS, and specified how a taxonomy relates to an ontology. For some reason, Tunkelang’s piece on facets best conveyed the structure of a taxonomy, even though it used pretty much the same metaphor all of our other readings did (root, branch, leaf).
I found the readings about Ranganathan’s Colon Classification particularly interesting as well. Stackel’s description of Ranganathan’s issue with most classification systems confused me at first. He said, “For Ranganathan, the problem with the Dewey Decimal and Library of Congress classification systems is that they used indexing terms that had to be thought out before the object being described could fit into the system.” I had a general understanding that these indexing terms were not intuitive, but I couldn’t think of a solid example. Reading Tunkelang’s piece made the idea more concrete, especially the discussion of how the DDC classifies literature on cats. From there I could better understand how Ranganathan’s Colon Classification allows for “hospitality at many points.” I’m still not sure I understand it well enough to criticize it, but the language Ranganathan uses to describe his theory also impressed me. With words like “hospitality” and “extrapolation,” he seemed to treat pieces of information as dimensional, flexible beings rather than just books or objects. His language renders information as occupying and traveling through space much like matter, an idea the LoC and DDC systems don’t seem to echo. Instead of building a home or a space around information, the LoC and DDC seem more invested in herding it into predetermined categories, which is how we end up with books about cats classified under technology.
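Ranganathan’s “hospitality at many points” is easier to see in a sketch. Below is a toy illustration in Python; the facet names follow Ranganathan’s PMEST formula, but the codes and example class numbers are invented for illustration, not real Colon Classification notation:

```python
# Toy contrast between hierarchical (DDC-like) and faceted (Colon-style)
# classification. Facet names follow Ranganathan's PMEST formula;
# all codes below are invented for illustration.

# Hierarchical: every subject needs a predetermined slot in one tree.
ddc_like = {
    "636.8": "Technology > Agriculture > Animal husbandry > Cats",
}

def colon_number(personality, matter=None, energy=None, space=None, time=None):
    """Combine whichever PMEST facets apply, joined Colon-style.

    The 'hospitality' is that any combination of existing facet codes
    yields a valid class number without revising the whole scheme.
    """
    facets = [personality, matter, energy, space, time]
    return ":".join(code for code in facets if code)

# A new compound subject is synthesized from parts rather than pre-listed:
number = colon_number(personality="K96", energy="5", space="44", time="N9")
print(number)  # K96:5:44:N9
```

The contrast with the dictionary above is the point: the hierarchical scheme must anticipate every subject in advance, while the faceted function composes new subjects on demand.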
I really enjoyed the Baker reading, but I got hung up on the section about “constraining data versus constraining underlying domain models and vocabularies.” I’m not quite understanding the difference, and it may be due to a flaw in my understanding of how these models and vocabularies work. As of right now, I’m under the impression that data models and vocabularies all form the base of ontological investigation, so isn’t constraining either of those constraining “things in the real world,” as Baker puts it? If we’re going to talk ontology and vocabulary, wouldn’t most philosophers say that any vocabulary inherently limits ontological expression as much as it serves it? So how can Baker put forth ISBD in RDF, or the Murray-Tillett approach to FRBR, as less limiting of “things in the real world” than OWL, but still limiting of “things in data”? Doesn’t data, or our perception of data, alter our vision of “the real world”? I understand that declaring entities disjoint inhibits a more associative, user-engaged process, but I still don’t understand why these other vocabularies are seen as less constricting when the very nature of language is one of constriction, or, if you’d rather, of expression within the confines of socially agreed-upon standards. I suspect I’m getting too philosophical for my own good. I also suspect I may be applying my understanding of linguistics in philosophy too acutely to this reading and to my (admittedly poor) understanding of descriptive languages and cataloging.
As always, I am fascinated by the business side of libraries, so that’s probably why Bradford Lee Eden’s “The New User Environment: The End of Technical Services?” appealed to me so much. I feel it was a good balance to Tennant’s sensationalism and bad taste. (“It was then that I remembered Gil Scott-Heron’s message to his black brothers and sisters back in the day.” …!!!!!) I was fascinated by Eden’s point that “more than 80 percent of information seekers begin their search on a Web search engine.” It shouldn’t surprise me. After all, isn’t that how I started research for my own final project proposal? I wonder, however, what percentage feel the need to refine their results with OPACs? Not enough, it seems, as Eden goes on to quote Marcia Bates, saying that users “even use information that they know to be of poor quality and less reliable—so long as it requires little effort to find—rather than using information they know to be of high quality and reliable, though harder to find.” Framing web search engines as competitors to OPACs is not a distinction I had fully made, for whatever reason. But the idea of humans vs. machines pops up again when Eden talks about what to do with the seemingly devalued technical services staff. He proposes that they remain vital by becoming metadata librarians, utilizing their knowledge of information organization along with interdisciplinary technical skills. This idea appealed to me because I began to think about humans vs. machines in the market. While we humans can’t compete with machines in terms of speed or efficiency, we are typically more flexible and holistic. I agree with Eden that, from this standpoint, libraries need to augment their current systems and focus on the strengths they can bring to information seekers.
Name: Alana Mohamed
My group has chosen: Information Architecture in Libraries/Museums
I am interested in this topic because: I came into LIS because I was interested in how technology could democratize the accessibility of information. IA seems to be an innovative field that combines patron usability with technology to better serve and stimulate the patron.
- Fushimi, Kiyoka and Motoyama, Kiyofumi. “User-Centered Design: Improving Viewers’ Learning Opportunities in Art Museums in Japan.” The Journal of Museum Education 32.1 Spring 2007: 73-86. JSTOR.
- Garrigan, Shelley. “Displaced Patrimonies: Democratization and Virtual Museums in Latin America.” Revista Canadiense de Estudios Hispánicos 31.1 Autumn 2006: 161-174. JSTOR.
- Marty, Paul F. and Jones, Katherine. Museum Informatics: People, Information, and Technology in Museums. New York: Routledge, 2009.
- Parry, Ross. Recoding the Museum: Digital Heritage and the Technologies of Change. New York: Routledge, 2007.
Based on my preliminary research, I have chosen to focus on the following aspect of our group topic:
I wanted to focus on how museums can implement IA to improve user experience and empower patrons to engage in self-directed learning. This idea was partly inspired by the Brooklyn Navy Yard’s museum, BLDG 92, which uses interactive digital interfaces to better engage the patron while increasing the educational merit of the museum. Of course, there are no scholarly articles on this relatively new museum, but my research has added a global context to my original interest, with case studies in Japan and Latin America. Ross Parry’s Recoding the Museum gives a general overview of museums changing in the digital age and provides some West-centric examples. My lit review is mainly focused on how IA’s concern with user-centered design, digital media, software, and enterprise architecture can help to democratize information by engaging users in interactive digital learning that supplements a museum’s existing physical collection.
I’m interested in the interpersonal exchanges that go into the creation of metadata. It seems that all these standards need to be agreed upon in some way. I was quite interested in Chowdhury & Chowdhury’s point that there are “two distinct schools of thought that influence the development of metadata standards.” It seems to me that since one of the prime functions of metadata is to “facilitate information retrieval,” these standards must be carefully considered to allow the public the greatest access to information.
However, I’m not exactly sure I understand the distinction Chowdhury & Chowdhury make between the two groups yet. Structuralists seem more interested in less-regulated metadata that are perhaps easier for public use but make more work for the cataloger. Minimalists seem more interested in convenience for document authors and (it seems) the computer programs that use this metadata. Or at least, that’s what I interpret Chowdhury’s use of the word “tools” to mean. Without dwelling on how badly I’ve fudged the distinction between the two camps, I was really fascinated by the idea that users are not only our human public, but our machines as well. Thinking of machines as distinct users with their own needs really pushes back against larger cultural conceptions of the computer as some automated overlord. But the truth is that computers follow their own logic, and that logic has limits a human user might not face. Gilliland talks about the specific needs of human users when discussing user-created metadata:
Among the advantages of these approaches is that individual Web communities such as affinity groups or hobbyists may be able to create metadata that addresses their specific needs and vocabularies in ways that information professionals who apply metadata standards designed to cater to a wide range of audiences cannot.
This also points to the problem that, unlike computers, not all humans use similar logic. User-created content is individual-specific, which means it’s less generalizable, but more efficient for certain groups of people. Consider: how would low-income individuals talk about poverty and how would an economist? And which low income individuals? How old are they, where are they from, what is their race, nationality, gender? It’s clear that there are multiple languages surrounding poverty, but how does that compare to how the Library of Congress would categorize books about poverty? What use is free literature about poverty to an impoverished community if they cannot access it? I suppose the arguments could be made that A) that’s what librarians are for and B) they would need to, at some point, assimilate to mainstream language about poverty. Still, how do we create a system that is useful to both machines and humans? It seems that multiple standards would need to be used to increase accessibility to both.
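One simple pattern for serving both machine users and varied human communities at once is a vocabulary crosswalk that maps community-specific terms onto a shared controlled heading. A minimal sketch in Python; the tags and the heading here are invented examples for illustration, not actual Library of Congress terms:

```python
# Sketch of a vocabulary crosswalk: community-specific tags map to one
# controlled heading, so varied human search terms and uniform machine
# indexing can coexist. All terms below are hypothetical examples.

crosswalk = {
    "making ends meet": "Poverty",
    "low income": "Poverty",
    "income inequality": "Poverty",
}

def controlled_heading(user_term):
    """Return the controlled heading for a user's term, or None if unmapped."""
    return crosswalk.get(user_term.lower().strip())

print(controlled_heading("Low income"))  # Poverty
print(controlled_heading("wealth gap"))  # None -- a vocabulary gap to fill
```

The unmapped case is where the human questions above come back in: someone has to decide whose language gets added to the crosswalk, and that is exactly the kind of judgment a machine can’t make on its own.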
Reading the LoC’s “What is a MARC Record and Why is it Important?” made me wonder about the bureaucratic side of librarianship. Roy Tennant touched on this in “MARC Must Die.” He talked about how libraries are “limited to the niche market of library vendors” and how using XML would make it easier and cheaper for vendors to “produce the products we require.” The idea of libraries as businesses is one I haven’t considered much before, but how much are libraries spending on these vendors? And how much of our usage is dictated by them? If it is hard and expensive for vendors to produce products, surely it is expensive for libraries too? What do these services cost us? I was also concerned with the homogenization of the organization of information when reading the LoC’s dated article. As we evolve from “floppy diskettes” to file transfer, information begins to travel faster. Is this too fast for the average librarian to keep track of? Tennant’s piece on granularity seems to say yes. It seems that within this niche market a few vendors must have some kind of monopoly over the organization of catalog information, and I wonder whether, when pressed, libraries could be forced to choose a less-than-stellar product because it is “easier and cheaper” to obtain.
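Tennant’s point about vendors is partly about tooling: MARC’s record format needs specialized software, while XML can be read and written with the standard libraries that ship with nearly every programming language. A rough sketch of the same title field both ways; the field layout follows MARC tag 245, but the XML is a simplified stand-in loosely modeled on MARCXML, not the real schema:

```python
# The same MARC title field (tag 245) as delimited text vs. as XML.
# The tag/indicator/subfield layout is real MARC; the XML element names
# are a simplified stand-in for MARCXML, built with only the stdlib.
import xml.etree.ElementTree as ET

# MARC-style: tag, indicators, then subfields marked with $ delimiters.
marc_field = "245 10 $a Recoding the museum : $b digital heritage and the technologies of change"

# The same data as XML, which any generic parser or vendor tool can read.
datafield = ET.Element("datafield", tag="245", ind1="1", ind2="0")
for code, text in [
    ("a", "Recoding the museum :"),
    ("b", "digital heritage and the technologies of change"),
]:
    sub = ET.SubElement(datafield, "subfield", code=code)
    sub.text = text

print(ET.tostring(datafield, encoding="unicode"))
```

Nothing library-specific was needed for the XML half, which is Tennant’s “easier and cheaper” claim in miniature: the parsing burden moves from niche vendor software to commodity tools.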