- Trying Twitter
- Cataloging Tool?
- Open Monograph Press 1.0
- Standards for Sharing
- Help Wanted - Science Programs
- Additions to Source Codes for Vocabularies, Rules, and Schemes
- Danacode (D.A.N.A. Systems)
- UK Standard Library Categories (London: BIC)
- Peeps at the Library
- Tamashek Romanization
- IA Summit
- Mapping Dublin Core Terms to the PROV-O OWL2 Ontology
- A formal ontology for historical maps
- MARC Concise Formats 2012
- FRBR for Serials
- ISO Metadata Training
- OLAC Newsletter
- Recommended Practices for Online Supplemental Journal Article Materials Teleconference
- MARC at Midwinter
- Genre and Form Term Usage
- 10th Anniversary
- NISO Newsletter
- NISO Publishes Maintenance Revisions of Dublin Core and SUSHI Standards
- Additions to Source Codes for Vocabularies, Rules, and Schemes
- Klassifikasjonsskjema (Stavanger: Misjonshøgskolen)
- isbdmedia - ISBD Area 0 (Content Form and Media Area)
- Letapis Druku Belarusi = Chronicle of the Press Belarus (Minsk: Natsyianalnaia kniznaia palata Belarusi)
- Free Your Metadata
- eXtensible Catalog Drupal Toolkit
- Problems with Library Catalogs
I've been doing this weblog for over ten years. It's getting a bit old. I'm going to try posting to Twitter (https://twitter.com/Catalogablog) and see if that revives my interest.
For years I've used MARC Magician to create bibliographic records. It is showing its age: updates have been few since 2005, and most of the newer fields and codes are missing. I can establish them myself, but that seems like work the company should do for all its customers. I get the feeling the company is not interested in the software and is just letting it age into obsolescence. Is there an RDA-ready tool out there? Something that has all the new MARC fields, shows examples, and gives tips according to RDA? How about ITS BiblioFile? Anything else? Thanks.
The Public Knowledge Project has announced the 1.0 release of Open Monograph Press.
The Public Knowledge Project (PKP) is very pleased to announce the 1.0 release of Open Monograph Press (OMP). OMP is an open source software platform for managing the editorial workflow required to see monographs, edited volumes, and scholarly editions through internal and external review, editing, cataloguing, production, and publication. OMP will operate, as well, as a press website with catalog, distribution, and sales capacities. OMP 1.0 improves upon the public beta released in September 2012 in a number of ways: it includes stability bug fixes and enhancements, particularly to the production and distribution workflows, and adds ONIX for Books metadata support. It also includes multilingual support for French, Greek, Brazilian Portuguese, and Spanish.
“Like”-able Content: Spread Your Message with Third-Party Metadata by Clinton Forry appears in the latest A List Apart. He looks at Twitter Cards and Facebook’s Open Graph protocol.
While implementing third-party metadata schemas will add to the content creation workload, that extra effort will provide a much better user experience across multiple platforms and devices, both current and upcoming. Crafting content in discrete chunks with an eye on universal application and flexibility is the way of the future.
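For readers who have not seen these schemas, the markup involved is just a handful of `meta` elements in the page head. A minimal illustration (the property names are the standard Open Graph and Twitter Card ones; the content values are placeholders):

```html
<!-- Facebook Open Graph -->
<meta property="og:title" content="Page title" />
<meta property="og:type" content="article" />
<meta property="og:url" content="http://example.com/page" />
<meta property="og:image" content="http://example.com/thumbnail.png" />
<meta property="og:description" content="Short summary shown when the page is shared." />

<!-- Twitter Card -->
<meta name="twitter:card" content="summary" />
<meta name="twitter:title" content="Page title" />
<meta name="twitter:description" content="Short summary shown when the page is shared." />
```

When a URL is shared, the platform's crawler reads these tags to build the link preview instead of guessing from the page body.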
Could anyone point me to good statistics about science programming in libraries? Maybe some dissertations? I'm just not finding anything, but I don't have access to Dissertation Abstracts. Thanks.
The source codes listed below have been recently approved. The codes will be added to the applicable Source Codes for Vocabularies, Rules, and Schemes lists. See the specific source code lists for current usage in MARC fields and MODS/MADS elements. The codes should not be used in exchange records until 60 days after the date of this notice to provide implementers time to include newly-defined codes in any validation tables. Standard Identifier Source Codes The following source code has been added to the Standard Identifier Source Codes list for usage in appropriate fields and elements. Addition:
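In practice, a standard identifier source code is recorded in subfield $2 of field 024 with first indicator 7. A hedged illustration, using the newly approved Danacode source code with a made-up identifier value:

```
024 7# $a 1234567890 $2 danacode
```

The $2 value tells downstream systems which agency's identifier scheme the number in $a belongs to.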
Easter is approaching, so it is Peeps season. Time to review Peep Research: A Study of Small Fluffy Creatures and Library Usage.
Although scientific and health research has been conducted on Peeps, most notably that appearing on the Peep Research website (see http://www.peepresearch.org), we have noted an absence of research focusing on the ability of Peeps themselves to actually do research. To address this lack, we invited a small group of Peeps to visit Staley Library at Millikin University during the week of March 17-21, 2003 so that we could more closely observe their research practices. This was determined to be an ideal week for the Peeps to visit the library, as Millikin University students were on spring break. The research that follows documents their visit to the library and provides some evaluative commentary on our assessment of Peeps and library usage.

The Georgetown Public Library Online Tour also features Peeps.
A proposal for a Tamashek romanization table is available for review. Comments on this proposed romanization table are being accepted until June 20, 2013.
A List Apart is giving away a free ticket to the IA Summit in Baltimore, April 5-7. A few of the talks:

* Metadata in the Cross-Channel Ecosystem: Consistency, Context and Interoperability
* Taxonomy for App Makers
* Fringe IA: Understanding Complex Organizational, Data, & Technical Issues
* Secrets of Audio Transcription in Improving UX Universally
Dublin Core to PROV Mapping, W3C Working Draft 12 March 2013 seeks comments.
This document describes a partial mapping from Dublin Core Terms [DCTERMS] to the PROV-O OWL2 ontology [PROV-O]. A substantial number of terms in the Dublin Core vocabulary provide information about the provenance of the resource. Translating these terms to PROV makes the contained provenance information explicit within a provenance chain. The mapping is expressed partly by direct RDFS/OWL mappings between properties and classes.
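To give a flavor of the direct-mapping style, here is one of the simpler cases in Turtle: a resource's creator is an agent the resource was attributed to. (Illustrative only; consult the draft for the normative list of mappings.)

```turtle
@prefix dct:  <http://purl.org/dc/terms/> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Stating that dct:creator specializes prov:wasAttributedTo means any
# dct:creator statement also carries explicit PROV attribution.
dct:creator rdfs:subPropertyOf prov:wasAttributedTo .
```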
A formal ontology for historical maps by Eleni Gkadolou and Emmanuel Stefanakis will be presented at the 26th International Cartographic Conference, August 25-30, 2013, Dresden, Germany.
Historical maps are a major component of our scientific and cultural heritage collections. Apart from the aesthetic value of the artifacts, maps also deliver valuable historical and geographic information. In order to use the historical cartographic information effectively, the semantic documentation of maps becomes a necessity and ontologies are suggested to achieve this. This paper examines how the top level ontology CIDOC-CRM "handles" historical maps and presents a formal description of the "Carte de la nouvelle frontière Turco-Grecque", a map attached to the Convention of Constantinople that set the borderlines between Greece and the Ottoman Empire in 1881.
MARC Concise Formats (2012 Edition) are now available to download as PDFs.

* Table of Contents
* General Introduction
* Bibliographic
* Authority
* Holdings
* 2012 Changes
Announcement about using FRBR for journals.
Version 0.1 of PRESSoo, a conceptual model accounting for the bibliographic description of serials, is now available from the following address: https://listes.services.cnrs.fr/wws/d_read/ontologie-patrimoine/PRESSoo_01.pdf

The intention, while drafting this document, was to fill in a gap acknowledged in the FRBR Final Report (section 1.3, "Areas for Further Study"): "Certain aspects of the model merit more detailed examination. The identification and definition of attributes for various types of material could be extended through further review by experts and through user studies. In particular, the notion of 'seriality' and the dynamic nature of entities recorded in digital formats merit further analysis."

PRESSoo is defined as an extension of the FRBRoo model, which in turn is defined as an extension of the CIDOC CRM model as well as an object-oriented reformulation of the original FRBR entity-relationship model. PRESSoo was developed by a small working group gathering representatives of the ISSN International Centre and the National Library of France.

This document is labelled version 0.1 because it still has to be reviewed by a larger community, most notably the international ISSN network, the FRBR/CIDOC CRM Harmonization Working Group, and the IFLA FRBR Review Group. Version 1.0 will only be attained once PRESSoo has been amended and validated by that larger community. Any comment/criticism/proposal welcome!
The Federal Geographic Data Committee (FGDC), as it transitions from FGDC to ISO geographic metadata, has provided training for other federal agencies. Some of the sessions have been recorded and the videos made available:

* Intro to Metadata
* ISO 101
* XML Basics
* UML Basics
* Tools
* Creating ISO Metadata
* Validation
* Data Discovery

ftp://ftp.ncddc.noaa.gov/pub/Metadata/Online_ISO_Training/Intro_to_ISO/recorded_sessions/
The March 2013 issue of the OLAC Newsletter is now available. In this issue:

* Stay up to date with the ALA Midwinter conference by reading about what happened at the OLAC, CAPC, MARBI, and CC:DA meetings.
* Meet the candidates running for OLAC office: there are three candidates for Vice President/President-Elect and two for Treasurer/Membership Coordinator.
* Learn about your fellow OLAC members in a brand-new column called In the Spotlight. Column editor Bojana Skarich will profile a different OLAC member in each future issue; in this issue, she introduces herself to OLAC members. Read about Bojana and contact her if you would like to be featured or to nominate another OLAC member.
* The always interesting Catalogers Judgment, the first thing I turn to and read in each issue.

The membership fees have just been modified. Those of you living overseas now have lower membership rates.
NISO Teleconference news.
NISO will hold its monthly open teleconference this coming Monday, March 11th, at 3:00 p.m. Eastern time. This month, we will be discussing the recently published NISO RP-15-2013, Recommended Practices for Online Supplemental Journal Article Materials (available at http://www.niso.org/apps/group_public/download.php/10055/RP-15-2013_Supplemental_Materials.pdf). This document was jointly sponsored and published by NFAIS, the National Federation for Advanced Information Services. Business Working Group co-chair Marie McVeigh of Thomson Reuters and Technical Working Group co-chair Sasha Schwarzman of The Optical Society (OSA) will be participating on the call to describe the work and answer any questions.

Supplemental materials are increasingly being added to journal articles, but until now there has been no recognized set of practices to guide the selection, delivery, discovery, and preservation of these materials. To address this gap, NISO and NFAIS jointly sponsored an initiative to establish best practices that would provide guidance to publishers and authors for management of supplemental materials and would address related problems for librarians, abstracting and indexing services, and repository administrators. The Supplemental Materials project involved two teams working in tandem: one to address business practices and one to focus on technical issues. This new publication is the combined outcome of the two groups' work.

The call is free and anyone is welcome to participate. To join, simply dial 877-375-2160 and enter the code 17800743#. All calls are held from 3-4 p.m. Eastern time.
The cover sheets for the proposals and discussion papers presented at the ALA 2013 Midwinter meetings of the MARC Advisory Committee have been updated with the results of the discussions. They are available at:

* Proposal 2013-01: Identifying Titles Related to the Entity Represented by the Authority Record in the MARC 21 Authority Format
* Proposal 2013-02: New Fields to Accommodate Authority Records for Medium of Performance Vocabulary for Music in the MARC 21 Authority Format
* Proposal 2013-03: Making Field 250 Repeatable in the MARC 21 Bibliographic Format
* Proposal 2013-04: Defining New Code for Score in Field 008/20 (Format of music) in the MARC 21 Bibliographic Format
* Proposal 2013-05: Defining New Field 385 for Audience Characteristics in the MARC 21 Bibliographic and Authority Formats
* Proposal 2013-06: Defining New Field 386 for Creator/Contributor Group Categorizations in the MARC 21 Bibliographic and Authority Formats
* Proposal 2013-07: Defining Encoding Elements to Record Chronological Categories and Dates of Works and Expressions in the MARC 21 Bibliographic and Authority Formats
* Discussion Paper 2013-DP01: Identifying Records from National Bibliographies in MARC 21 Bibliographic Format
* Discussion Paper 2013-DP02: Defining Subfields for Qualifiers to Standard Identifiers in the MARC 21 Bibliographic, Authority, and Holdings Formats
* Discussion Paper 2013-DP03: Defining a Control Subfield $7 in the Series Added Entry Fields, for the Type and the Bibliographic Level of the Related Bibliographic Record
* Discussion Paper 2013-DP04: Separating the Type of Related Entity from the RDA Relationship Designator in MARC 21 Bibliographic Format Linking Entry Fields
Roy Tennant has posted a list of $a and $2 combinations for the 655 field. It is interesting to see some terms with high counts that are tagged as local. It seems those should be considered for incorporation into an existing vocabulary. Electronic books, with 37,605 uses, seems a good candidate. Dissertations, with 16,733 hits, is another one to consider.
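The tally behind such a list is straightforward to reproduce. A minimal sketch (not Tennant's actual code) assuming the 655 fields have already been parsed into ($a term, $2 source) pairs; the sample data below is hypothetical:

```python
from collections import Counter

def tally_genre_terms(fields_655):
    """Count occurrences of each ($a term, $2 source) combination."""
    return Counter((term, source) for term, source in fields_655)

# Hypothetical sample of parsed 655 subfield pairs:
sample = [
    ("Electronic books", "local"),
    ("Electronic books", "local"),
    ("Dissertations", "local"),
    ("Feature films", "lcgft"),
]

# Print combinations from most to least frequent.
for (term, source), count in tally_genre_terms(sample).most_common():
    print(f"{count}\t{term}\t({source})")
```

Sorting the resulting counts makes high-frequency locally tagged terms, the candidates for promotion into a shared vocabulary, easy to spot.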
The other day Catalogablog turned 10. The first post dealt with RSS, which I guess was new back then. One of the two links is still valid.
In other NISO news, the latest NISO Newsline has been published. Topics include:

* ISO 25964-2:2013, Information and documentation – Thesauri and interoperability with other vocabularies – Part 2: Interoperability with other vocabularies
* EDItEUR, Updated FAQ on eBooks and ONIX
* ISO/IEC 11179-3:2013, Information technology – Metadata Registries (MDR) – Part 3: Registry metamodel and basic attributes
* ISO/IEC 17963:2013, Web Services for Management (WS-Management) Specification
* OASIS, searchRetrieve version 1.0

There is also a survey request.
NISO is a membership organization that must be responsive to community needs and interests. As an organization with limited resources, it must also prioritize the many strands of activity that are taking place, to ensure we are working toward goals which will have the greatest impact. To help prioritize our work, the NISO Architecture Committee is identifying the important technologies and trends that face our community. As part of this process, we would like the NISO membership to complete an online survey related to potential NISO directions and activities.

And a reminder that comments on the ResourceSync Framework Specification for the web, "detailing various capabilities that a server can implement to allow third-party systems to remain synchronized with its evolving resources," are due by March 15.
The latest news from NISO.
The National Information Standards Organization (NISO) announces the publication of maintenance revisions of two widely used standards: the Dublin Core Metadata Element Set (ANSI/NISO Z39.85-2012) and the Standardized Usage Statistics Harvesting Initiative (SUSHI) Protocol (ANSI/NISO Z39.93-2013). Both standards were revised to make very minor updates. The Dublin Core standard defines fifteen metadata elements for resource description in a cross-disciplinary information environment and is used as the basis for most metadata standards in use today. The SUSHI Protocol defines an automated request and response model for the harvesting of electronic resource usage data and is required for conformance with the COUNTER Code of Practice.

"The DCMI Usage Board approved a change to the usage comment for the subject element to eliminate some ambiguity with the coverage element," explains Thomas Baker, Chief Information Officer for the Dublin Core Metadata Initiative, the maintenance agency for the Dublin Core standard. "The new version of the ANSI/NISO standard corresponds to version 1.1 of the specification on the DCMI website."

"The SUSHI Standing Committee initiated this revision of the standard to make two minor updates," states Oliver Pesch, Chief Strategist for EBSCO Information Services and Co-chair of the SUSHI Standing Committee. "An additional error code was added and the appendix about security considerations was updated to reflect technology changes and experience gained since the initial implementation of the SUSHI protocol."

"Standards do not drop into a black hole once they are published," states Todd Carpenter, NISO Executive Director. "They must be supported and regularly reviewed to ensure they are kept up-to-date. Both the Dublin Core and the SUSHI standard receive ongoing oversight from their respective Maintenance Agency and Standing Committee. The maintenance revisions just published are examples of how the standards are revised to address even minor issues found during implementation."

Both standards are available for free download from the NISO website: Dublin Core at www.niso.org/standards/z39-85-2012 and SUSHI at www.niso.org/standards/z39-93-2013/. Additional information on the use of the Dublin Core standard is available from the DCMI website at www.dublincore.org. SUSHI FAQs, schemas, and implementation information are available at www.niso.org/workrooms/sushi.
The source codes listed below have been recently approved. The codes will be added to the applicable Source Codes for Vocabularies, Rules, and Schemes lists. See the specific source code lists for current usage in MARC fields and MODS/MADS elements. The codes should not be used in exchange records until 60 days after the date of this notice to provide implementers time to include newly-defined codes in any validation tables. Classification Scheme Source Codes The following source code has been added to the Classification Scheme Source Codes list for usage in appropriate fields and elements. Addition:
Free Your Metadata is a site that describes using Google Refine and some extensions to clean and reconcile metadata, and to automatically extract personal, corporate, and geographic names.
* Clean up: Clean up your metadata and discover how to handle those embarrassing errors.
* Reconcile: Match your metadata with controlled vocabularies connected to the Linked Data cloud.
* Entity extraction: Even unstructured fields can provide meaning thanks to named-entity extraction.
* Sustainable access: Once your metadata is in shape, it is ready to be published in a sustainable way.
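The clean-up step is the kind of normalization Google Refine automates through its UI. As a rough illustration of what it does under the hood (my own sketch, not the site's code): trim and collapse whitespace, then drop case-insensitive duplicates while keeping the first-seen form.

```python
import re

def clean_values(values):
    """Normalize a column of metadata values: trim and collapse
    whitespace, then remove case-insensitive duplicates, preserving
    the first-seen spelling of each value."""
    seen = set()
    cleaned = []
    for value in values:
        normalized = re.sub(r"\s+", " ", value).strip()
        key = normalized.casefold()
        if normalized and key not in seen:
            seen.add(key)
            cleaned.append(normalized)
    return cleaned

print(clean_values(["  New  York ", "new york", "Boston", ""]))
# → ['New York', 'Boston']
```

Doing this before reconciliation matters: "New  York" and "new york" should hit the same controlled-vocabulary entry, not two.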
News from the eXtensible Catalog project.
I am happy to announce that, after several months of development, eXtensible Catalog Drupal Toolkit 1.3 has just been released. The eXtensible Catalog Drupal Toolkit is the front end of eXtensible Catalog (XC), built on the Drupal content management system. It contains a set of 25 Drupal modules, a custom theme, an installation profile, and a customized Apache Solr search engine. XC is a discovery interface built on an FRBR- and RDA-like metadata structure.

The release has a primary focus on data integrity, namely being able to successfully process record updates on a scheduled basis. This includes new additions, updates, and deletions of records. This release also includes some Solr integrity fixes submitted by Kyushu University.

The installation process for release 1.3 has been reworked to include an implementation option using Drush that makes the installation substantially easier. If you have Drush, the whole installation is only four steps. We also created a custom Solr package which is pre-configured to the needs of the Drupal Toolkit. You can find the installation instructions and release notes here: http://drupal.org/project/xc_installation. I hope you will find it useful.

Now we are working hard on creating the first stable release of the Drupal 7 version. Any comments, suggestions, and feedback are more than welcome. You can find the project's issue tracker here: http://extensiblecatalog.lib.rochester.edu:8080/browse/DRUPAL. The eXtensible Catalog project's website is available at http://eXtensibleCatalog.org
Catalog Matters Podcast no. 18: Problems with Library Catalogs by James Weinheimer is available.
In the last episode, I provided some examples of how people want to manipulate data instead of plowing their way through masses of printed text, but I went on to express my doubts that the information in catalog records is actually the type of information that most people want to manipulate. I would like to continue that discussion.