
The Code4Lib Journal

Reflections on the occasion of the 25th issue of the Code4Lib Journal: sustaining a community for support, inspiration, and collaboration at the intersection of libraries and information technology.

http://feedproxy.google.com/~r/c4lj/~3/acPedsepBTE/9987 2014/07/22 - 12:43

Libraries regularly pay for packages of e-resources containing hundreds to thousands of individual titles. Ideally, library patrons could access the full content of all titles in such packages. In reality, library staff and patrons inevitably stumble across inaccessible titles, but no library has the resources to manually verify full access to all titles, and basic URL checkers cannot check for access. This article describes the E-Resource Access Checker—a script that automates the verification of full access. With the Access Checker, library staff can identify all inaccessible titles in a package and bring these problems to content providers’ attention to ensure we get what we pay for.
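The published Access Checker is its own tool; as a rough sketch of the underlying idea (in Python, with hypothetical file names and paywall indicator strings), a script can fetch each title-level URL in a package and flag responses that look like an error or paywall page rather than full text:

```python
import csv
import requests

# Hypothetical phrases that suggest a title is NOT fully accessible.
# Real checks would be tailored to each platform's paywall and error pages.
DENIAL_MARKERS = ["purchase this article", "access denied", "not entitled"]

def check_access(url, timeout=30):
    """Return (status, note) for a single title-level URL."""
    try:
        resp = requests.get(url, timeout=timeout)
    except requests.RequestException as exc:
        return "error", str(exc)
    if resp.status_code != 200:
        return "bad status", str(resp.status_code)
    text = resp.text.lower()
    for marker in DENIAL_MARKERS:
        if marker in text:
            return "no access", marker
    return "probable full access", ""

with open("package_titles.csv", newline="") as infile, \
     open("access_report.csv", "w", newline="") as outfile:
    reader = csv.DictReader(infile)          # expects columns: title, url
    writer = csv.writer(outfile)
    writer.writerow(["title", "url", "status", "note"])
    for row in reader:
        status, note = check_access(row["url"])
        writer.writerow([row["title"], row["url"], status, note])
```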

http://feedproxy.google.com/~r/c4lj/~3/VSU2oyUWNRg/9684 2014/07/22 - 12:43

Libraries have long relied on OCLC’s WorldCat database as a way to cooperatively share bibliographic data and declare library holdings to support interlibrary loan services. As curator, OCLC has traditionally mediated all interactions with the WorldCat database through their various cataloging clients to control access to the information. As more and more libraries look for new ways to interact with their data and streamline metadata operations and workflows, these clients have become bottlenecks and inhibitors of library innovation. To address some of these concerns, in early 2013 OCLC announced the release of a set of application programming interfaces (APIs) supporting read and write access to the WorldCat database. These APIs offer libraries their first opportunity to develop new services and workflows that directly interact with the WorldCat database, and provide opportunities for catalogers to begin redefining how they work with OCLC and their data.

http://feedproxy.google.com/~r/c4lj/~3/kWWBmwKrQ8A/9863 2014/07/22 - 12:43

Docker is a relatively new method of virtualization available natively for 64-bit Linux. Compared to more traditional virtualization techniques, Docker is lighter on system resources, offers a git-like system of commits and tags, and can be scaled from your laptop to the cloud.

http://feedproxy.google.com/~r/c4lj/~3/ZttZbUYwSUE/9669 2014/07/22 - 12:43

We introduce a metadata schema that focuses on GIS discovery use cases for patrons in a research library setting. Text search, faceted refinement, and spatial search and relevancy are among GeoBlacklight's primary use cases for federated geospatial holdings. The schema supports a variety of GIS data types and enables contextual, collection-oriented discovery applications as well as traditional portal applications. One key limitation of GIS resource discovery is the general lack of normative metadata practices, which has led to a proliferation of metadata schemas and duplicate records. The ISO 19115/19139 and FGDC standards specify metadata formats, but are intricate, lengthy, and not focused on discovery. Moreover, they require sophisticated authoring environments and cataloging expertise. Geographic metadata standards target preservation and quality measure use cases, but they do not provide for simple inter-institutional sharing of metadata for discovery use cases. To this end, our schema reuses elements from Dublin Core and GeoRSS to leverage their normative semantics, community best practices, open-source software implementations, and extensive examples already deployed in discovery contexts such as web search and mapping. Finally, we discuss a Solr implementation of the schema using a "geo" extension to MODS.
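As a hedged illustration of the discovery-oriented approach, the sketch below posts a single record with Dublin Core-style and geospatial fields to a local Solr core; the field names, values, and core name are illustrative and may not match the schema described in the article:

```python
import json
import requests

# Illustrative record using Dublin Core-style and geospatial fields; the exact
# field names in the article's schema may differ.
record = {
    "uuid": "example-0001",
    "dc_title_s": "Hydrography, San Francisco Bay Area, 2010",
    "dc_description_s": "Polygon layer of surface water features.",
    "dct_provenance_s": "Example University",
    "dc_format_s": "Shapefile",
    "dc_rights_s": "Public",
    "solr_geom": "ENVELOPE(-123.0, -121.5, 38.5, 37.0)",  # W, E, N, S bounding box
}

# Post the record to a local Solr core named 'geoblacklight' (assumed).
solr_update = "http://localhost:8983/solr/geoblacklight/update?commit=true"
resp = requests.post(solr_update,
                     data=json.dumps([record]),
                     headers={"Content-Type": "application/json"})
resp.raise_for_status()
```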

http://feedproxy.google.com/~r/c4lj/~3/W9iCRYyJQ1A/9710 2014/07/22 - 12:43

The Community Cookbook project began by asking how local cookbooks in the library’s collection could be turned into a recipe database. The final website is both a recipe website and a collection of ebook versions of local cookbooks. This article discusses the use of open source software at every stage of the project, demonstrating that an open source publishing model is possible for any library.

http://feedproxy.google.com/~r/c4lj/~3/XodDzTMn-ek/9911 2014/07/22 - 12:43

The provincial library of West-Vlaanderen (Belgium) is digitizing a large part of its iconographic collection. For various technical and financial reasons, no specialist software was used. FastScan is a set of VBS scripts developed by the author using off-the-shelf software that was either included in MS Windows (XP & 7) or already installed (ImageMagick, IrfanView, LittleCMS, exiv2). This scripting package has sped up the digitization effort immensely. The article shows what software was used, the problems that occurred, and how the tools were scripted together.

http://feedproxy.google.com/~r/c4lj/~3/Qm-hfxnzuQc/9780 2014/07/22 - 12:43

Digital forensics tools have many potential applications in the curation of digital materials in libraries, archives and museums (LAMs). Open source digital forensics tools can help LAM professionals extract digital content from born-digital media and make more informed preservation decisions. Many of these tools can display the metadata of the digital media, but few provide file-level access without mounting the device or using complex command-line utilities. This paper describes a project to develop software that supports access to the contents of digital media without having to mount or download the entire image. The work examines two approaches to creating this tool: first, a graphical user interface running on a local machine; second, a web-based application running in a web browser. The project incorporates existing open source forensics tools and libraries, including The Sleuth Kit and libewf, along with the Flask web application framework and custom Python scripts to generate web pages supporting disk image browsing.
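A minimal sketch of the second, web-based approach, assuming the pytsk3 bindings for The Sleuth Kit, a raw disk image on local disk, and Flask; this is not the project's actual code, and EWF (.E01) images would additionally need libewf/pyewf glue:

```python
import pytsk3
from flask import Flask, jsonify

app = Flask(__name__)
IMAGE_PATH = "sample.dd"  # assumed raw disk image

@app.route("/listing/", defaults={"directory": "/"})
@app.route("/listing/<path:directory>")
def listing(directory):
    """Return the file names in one directory of the disk image as JSON."""
    img = pytsk3.Img_Info(IMAGE_PATH)
    fs = pytsk3.FS_Info(img)
    names = []
    for entry in fs.open_dir(path="/" + directory.strip("/")):
        name = entry.info.name.name.decode("utf-8", "replace")
        if name not in (".", ".."):
            names.append(name)
    return jsonify(sorted(names))

if __name__ == "__main__":
    app.run(debug=True)
```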

http://feedproxy.google.com/~r/c4lj/~3/7c8VW-rSojU/9773 2014/07/22 - 12:43

While there is a vast amount of useful US government data on the web, some of it is in a raw state that is not readily accessible to the average user. Data librarians can improve accessibility and usability for their patrons by processing data to create subsets of local interest and by appending geographic identifiers to help users select and aggregate data. This case study illustrates how census geography crosswalks, Python, and OpenRefine were used to create spreadsheets of non-profit organizations in New York City from the IRS Tax-Exempt Organization Masterfile. This paper illustrates the utility of Python for data librarians and should be particularly insightful for those who work with address-based data.
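A hedged sketch of the crosswalk step: join a hypothetical ZIP-to-borough crosswalk against the IRS file and keep only local records. The column names here are assumptions, not the actual IRS or crosswalk layouts:

```python
import csv

# Hypothetical crosswalk: ZIP code -> borough and county FIPS code.
with open("zip_to_borough.csv", newline="") as f:
    crosswalk = {row["zip"]: row for row in csv.DictReader(f)}

with open("irs_exempt_orgs.csv", newline="") as infile, \
     open("nyc_nonprofits.csv", "w", newline="") as outfile:
    reader = csv.DictReader(infile)
    fields = reader.fieldnames + ["borough", "county_fips"]
    writer = csv.DictWriter(outfile, fieldnames=fields)
    writer.writeheader()
    for org in reader:
        zip5 = org.get("ZIP", "")[:5]          # column name is an assumption
        match = crosswalk.get(zip5)
        if match:                               # keep only records of local interest
            org["borough"] = match["borough"]
            org["county_fips"] = match["county_fips"]
            writer.writerow(org)
```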

http://feedproxy.google.com/~r/c4lj/~3/AaBU994XLro/9652 2014/07/22 - 12:43

Fast retrieval of digital content has become mandatory for library and archive information systems. Many software applications have emerged to handle the indexing of digital content, from low-level ones such as Apache Lucene to more RESTful and web-services-ready ones such as Apache Solr and Elasticsearch. Solr’s popularity among library software developers makes it the de facto standard software for indexing digital content. For content (full-text content or bibliographic description) already stored inside a relational DBMS such as MariaDB (a fork of MySQL) or PostgreSQL, Sphinx Search Server (Sphinx) is a suitable alternative. This article covers an introduction to using Sphinx with MariaDB databases to index database content, as well as some examples of Sphinx API usage.
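Besides its native API, Sphinx also speaks the MySQL wire protocol (SphinxQL), so a standard MySQL client can query an index. A minimal sketch, assuming a Sphinx index named biblio built from a MariaDB source and the default SphinxQL port:

```python
import pymysql

# SphinxQL listens on its own port (9306 by default), speaking the MySQL protocol.
conn = pymysql.connect(host="127.0.0.1", port=9306, user="")
try:
    with conn.cursor() as cur:
        # 'biblio' is an assumed index name built from a MariaDB source table.
        cur.execute(
            "SELECT id, WEIGHT() AS relevance FROM biblio "
            "WHERE MATCH(%s) LIMIT 10",
            ("digital preservation",),
        )
        for doc_id, relevance in cur.fetchall():
            print(doc_id, relevance)
finally:
    conn.close()
```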

http://feedproxy.google.com/~r/c4lj/~3/eS0EFylW2HU/9793 2014/07/22 - 12:43

Previous articles in the Code4Lib Journal have covered the capabilities of FFmpeg in great detail. Given these excellent introductions, the purpose of this article is to tackle some of the common problems users might face, dissecting more complicated commands and suggesting possible uses.
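For flavor, here is one such command wrapped in Python: creating a scaled H.264/AAC access copy suitable for web playback. The flags are standard ffmpeg options; the file names and target height are placeholders:

```python
import subprocess

def make_access_copy(master, access, height=480):
    """Transcode a preservation master to a small H.264/AAC access copy."""
    cmd = [
        "ffmpeg",
        "-i", master,                      # input file
        "-vf", f"scale=-2:{height}",       # scale to given height, keep aspect ratio
        "-c:v", "libx264", "-crf", "23",   # H.264 video, constant-quality mode
        "-c:a", "aac", "-b:a", "128k",     # AAC audio
        "-movflags", "+faststart",         # move index so playback can start early
        access,
    ]
    subprocess.run(cmd, check=True)

make_access_copy("master.mov", "access.mp4")
```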

http://feedproxy.google.com/~r/c4lj/~3/9NWMw9nRngs/9856 2014/07/22 - 12:43

In March 2013, the University of Illinois at Urbana-Champaign Library adopted a policy to more closely integrate the HathiTrust Digital Library into its own infrastructure for digital collections. Specifically, the Library decided that the HathiTrust Digital Library would serve as a trusted repository for many of the library’s digitized book collections, a strategy that favors relying on HathiTrust over locally managed access solutions whenever this is feasible. This article details the thinking behind this policy, as well as the challenges of its implementation, focusing primarily on technical solutions for “remediating” hundreds of thousands of image files to bring them in line with HathiTrust’s strict specifications for deposit. This involved implementing HTFeed, a Perl 5 application developed at the University of Michigan for packaging content for ingest into HathiTrust, and its many helper applications (JHOVE to detect metadata problems, Exiftool to detect and repair missing image metadata, and Kakadu to create JPEG 2000 files), as well as a file format conversion process using ImageMagick. Today, Illinois has over 1,600 locally managed volumes queued for ingest, and has submitted over 2,300 publicly available titles to the HathiTrust Digital Library.
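A hedged sketch of the kind of remediation described, not the Library's actual scripts: Exiftool supplies a missing resolution value and ImageMagick converts a page image to TIFF before packaging. The tag values, target resolution, and file names are illustrative:

```python
import subprocess

def repair_resolution(tiff_path, dpi=400):
    """Write missing XResolution/YResolution values with exiftool (dpi is assumed)."""
    subprocess.run([
        "exiftool", "-overwrite_original",
        f"-XResolution={dpi}", f"-YResolution={dpi}", "-ResolutionUnit=inches",
        tiff_path,
    ], check=True)

def convert_to_tiff(jpeg_path, tiff_path):
    """Convert a JPEG page image to TIFF with ImageMagick before packaging."""
    subprocess.run(["convert", jpeg_path, tiff_path], check=True)

repair_resolution("page_0001.tif")
convert_to_tiff("page_0002.jpg", "page_0002.tif")
```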

http://feedproxy.google.com/~r/c4lj/~3/m6Zk6NL4Sqo/9703 2014/07/05 - 02:50

Making the Journal the best that it can be.

http://feedproxy.google.com/~r/c4lj/~3/YBGISTHz-OQ/9345 2014/04/17 - 19:24

Comprehensive social search on the Internet remains an unsolved problem. Social networking sites tend to be isolated from each other, and the information they contain is often not fully searchable outside the confines of the site. EgoSystem, developed at Los Alamos National Laboratory (LANL), explores the problems associated with automated discovery of public online identities for people, and the aggregation of the social, institutional, conceptual, and artifact data connected to these identities. EgoSystem starts with basic demographic information about former employees and uses that information to locate person identities in various popular online systems. Once identified, their respective social networks, institutional affiliations, artifacts, and associated concepts are retrieved and linked into a graph containing other found identities. This graph is stored in a Titan graph database and can be explored using the Gremlin graph query/traversal language and with the EgoSystem Web interface.
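To give a flavor of such a traversal, the sketch below uses the current Apache TinkerPop gremlinpython driver (EgoSystem itself targeted Titan and the Gremlin of that era); the server address, vertex label, property names, and edge label are all hypothetical:

```python
# Not EgoSystem code; only a flavor of a Gremlin traversal over an identity graph.
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

conn = DriverRemoteConnection("ws://localhost:8182/gremlin", "g")
g = traversal().withRemote(conn)

# Find a person identity by name, then list the names of connected identities.
names = (
    g.V().has("person", "name", "Jane Researcher")
     .out("knows")              # hypothetical edge label
     .values("name")
     .toList()
)
print(names)
conn.close()
```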

http://feedproxy.google.com/~r/c4lj/~3/fDgut2DqMcM/9519 2014/04/17 - 19:24

This article describes how the University of North Texas Libraries' Digital Projects Unit used simple, freely-available APIs to add place names to metadata records for over 8,000 maps in two digital collections. These textual place names enable users to easily find maps by place name and to find other maps that feature the same place, thus increasing the accessibility and usage of the collections. This project demonstrates how targeted large-scale, automated metadata enhancement can have a significant impact with a relatively small commitment of time and staff resources.
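As a hedged illustration of this kind of enhancement, the sketch below looks up nearby place names for a map's center point via the freely available GeoNames API; the project's own choice of APIs and fields may differ, and the coordinates and username are placeholders:

```python
import requests

def place_names(lat, lng, username="demo"):
    """Look up nearby place names for a map's center point via GeoNames.

    GeoNames stands in here as one freely available example; a registered
    username is required for real use.
    """
    resp = requests.get(
        "http://api.geonames.org/findNearbyPlaceNameJSON",
        params={"lat": lat, "lng": lng, "username": username},
        timeout=30,
    )
    resp.raise_for_status()
    return [g["name"] for g in resp.json().get("geonames", [])]

print(place_names(33.21, -97.13))  # hypothetical map centroid
```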

http://feedproxy.google.com/~r/c4lj/~3/FEJoIMc7-Jo/9415 2014/04/17 - 19:24

In late 2012, OSU Libraries and Press partnered with Maria’s Libraries, an NGO in rural Kenya, to give users the ability to crowdsource translations of folk tales and existing children's books into a variety of African languages, sub-languages, and dialects. Together, these two organizations have been creating a mobile-optimized platform using open source libraries such as Wink Toolkit (a library which provides mobile-friendly interaction from a website) and Globalize3 to allow for multiple translations of database entries in a Ruby on Rails application. Research on the successes of similar tools informed the design of a consistent user interface. The OSU Libraries & Press team delivered a proof-of-concept tool that has the potential to promote technology exploration, improve early childhood literacy, change the way we approach foreign language learning, and provide opportunities for cost-effective, multi-language publishing.

http://feedproxy.google.com/~r/c4lj/~3/oRilvcdqacY/9496 2014/04/17 - 19:24

In this article, we present a case study of how the main publishing format of an Open Access journal was changed from PDF to EPUB by designing a new workflow using JATS as the basic XML source format. We state the reasons for this change and discuss its advantages, how we did it, and the costs of changing an established Microsoft Word workflow. As an example, we use one typical sociology article with tables, illustrations and references. We then follow the article from JATS markup through different transformations resulting in XHTML, EPUB and MOBI versions. In the end, we put everything together in an automated XProc pipeline. The process has been developed on free and open source tools, and we describe and evaluate these tools in the article. The workflow is suitable for non-professional publishers, and all code is attached and free for reuse by others.

http://feedproxy.google.com/~r/c4lj/~3/qP_pIGrvDPo/9462 2014/04/17 - 19:24

The Valley Library at Oregon State University Libraries & Press supports access to technology by lending laptops and e-readers. As a newcomer to tablet lending, The Valley Library chose to implement its service using Google Nexus tablets and an open source custom firmware solution, CyanogenMod, a free, community-built Android distribution. They created a custom build of CyanogenMod featuring wireless updates, website shortcuts, and the ability to quickly and easily wipe devices between patron uses. This article shares code that simplifies Android tablet maintenance and addresses Android application licensing issues for shared devices.

http://feedproxy.google.com/~r/c4lj/~3/482ghzvOQv0/9482 2014/04/17 - 19:24

As the archival horizon moves forward, optical media will become increasingly significant and prevalent in collections. This paper sets out to provide a broad overview of optical media in the context of archival migration. We begin by introducing the logical structure of compact discs, providing the context and language necessary to discuss the medium. The article then explores the most common data formats for optical media: Compact Disc Digital Audio, ISO 9660, the Joliet and HFS extensions, and the Universal Disk Format (with an eye towards DVD-Video). Each format is viewed in the context of preservation needs and what archivists need to be aware of when handling these formats. Following this, we discuss preservation workflows and concerns for successfully migrating data away from optical media, as well as directions for future research.

http://feedproxy.google.com/~r/c4lj/~3/Qj6eAqxDQs8/9581 2014/04/17 - 19:24

Digital signage has been used in the commercial sector for decades. As display and networking technologies become more advanced and less expensive, it is surprisingly easy to implement a digital signage program at a minimal cost. In the fall of 2011, the University of Florida (UF), Health Sciences Center Library (HSCL) initiated the use of digital signage inside and outside its Gainesville, Florida facility. This article details UF HSCL’s use and evaluation of DigitalSignage.com signage software to organize and display its digital content.

http://feedproxy.google.com/~r/c4lj/~3/DiJzRIu8rYM/9407 2014/04/17 - 19:24

Hack your life with 10 New Year's resolutions from Code4Lib Journal.

http://feedproxy.google.com/~r/c4lj/~3/Ev2YhhuSr3M/9389 2014/01/18 - 20:42

With the recent surge in the mobile device market and an ever expanding patron base with increasingly divergent levels of technical ability, the University of Toronto Libraries embarked on the development of a new catalogue discovery layer to fit the needs of its diverse users. The result: a mobile-friendly, flexible and intuitive web application that brings the full power of a faceted library catalogue to users without compromising quality or performance, employing Responsive Web Design principles.

http://feedproxy.google.com/~r/c4lj/~3/h3m5kwCzcFs/9195 2014/01/18 - 20:42

Standards-based metadata in digital library collections is commonly less than standard. Limitations brought on by routine cataloging errors, sporadic use of authority files and controlled vocabularies, and systems that cannot effectively handle text encoding lead to pervasive quality issues. This paper describes the use of Linked Data for enhancement and quality control of existing digital collections metadata. We provide practical recipes for transforming uncontrolled text values into semantically rich data, performing automated cleanup on hand-entered fields, and discovering new information from links between legacy metadata and external datasets.
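One recipe in this spirit, sketched in Python and not taken from the paper itself: reconcile an uncontrolled personal-name string against VIAF's AutoSuggest service to propose candidate authority links:

```python
import requests

def viaf_candidates(name, limit=5):
    """Suggest VIAF identities for an uncontrolled name string."""
    resp = requests.get("https://viaf.org/viaf/AutoSuggest",
                        params={"query": name}, timeout=30)
    resp.raise_for_status()
    results = resp.json().get("result") or []
    return [(r.get("term"), r.get("viafid")) for r in results[:limit]]

# e.g. turn a hand-entered string into candidate authority links
for term, viafid in viaf_candidates("Twain, Mark 1835-1910"):
    print(term, "->", "http://viaf.org/viaf/" + viafid)
```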

http://feedproxy.google.com/~r/c4lj/~3/MniGhp9Q7DY/9214 2014/01/18 - 20:42

The University of North Texas (UNT) and the Oklahoma Historical Society (OHS) are collaborating to digitize, process, and make publicly available more than one million photographs from the Oklahoma Publishing Company’s historic photo archive. The project, started in 2013, is expected to span a year and a half and will result in digitized photographs and metadata available through The Gateway to Oklahoma History. The project team developed the workflow described in this article to meet the specific criterion that all of the metadata work occurs in two locations simultaneously.

http://feedproxy.google.com/~r/c4lj/~3/9vGDevFhI4w/9199 2014/01/18 - 20:42

This article discusses how the WSLS-TV News Digitization Project at the University of Virginia Libraries was the catalyst for creating a more formalized project workflow and the eventual creation of a Project Management Office. The project revealed the need for better coordination between various groups in the library and more transparent processes. By creating well-documented policies and processes, the new project workflow clarified roles, improved communication, and created greater transparency. The new processes enabled staff to understand how decisions are made and resources allocated, which allowed them to work more efficiently.

http://feedproxy.google.com/~r/c4lj/~3/QZFUhxU95Es/8652 2014/01/18 - 20:42

Audio digitization is becoming essential to many libraries. As more and more audio files are being digitally preserved, the workflows for handling those digital objects need to be examined to ensure efficiency. In some instances, files are being manually manipulated when it would be more efficient to manipulate them programmatically. This article describes a time-saving solution to the problem of how to split master audio files into sub-item tracks.
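A hedged sketch of one such programmatic approach, assuming ffmpeg and a CSV cue sheet with start, end, and filename columns (the column layout is an assumption, not the article's format):

```python
import csv
import subprocess

def split_master(master_wav, cue_csv):
    """Cut sub-item tracks out of one master WAV using ffmpeg stream copy.

    cue_csv is assumed to have columns: start, end, filename
    (times in hh:mm:ss form understood by ffmpeg).
    """
    with open(cue_csv, newline="") as f:
        for row in csv.DictReader(f):
            subprocess.run([
                "ffmpeg", "-i", master_wav,
                "-ss", row["start"], "-to", row["end"],
                "-c", "copy",                 # no re-encoding: fast and lossless
                row["filename"],
            ], check=True)

split_master("reel_042_master.wav", "reel_042_tracks.csv")
```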

http://feedproxy.google.com/~r/c4lj/~3/846hoKWyYq8/9314 2014/01/18 - 20:42

A prototype Digital Video Library was developed as part of a project to assist rural primary care clinics with diagnosis of autism, funded by the National Network of Libraries of Medicine. The Digital Video Library takes play sample videos generated by a rural clinic and makes them available to experts at the Autism Spectrum Disorders (ASD) Clinic at The University of Alabama. The experts are able to annotate segments of the video using an integrated version of the Childhood Autism Rating Scale, Second Edition, Standard Version (CARS2). The Digital Video Library then extracts the annotated segments and provides a robust search and browse feature. The videos can then be accessed by the subject's primary care physician. This article summarizes the development and features of the Digital Video Library.

http://feedproxy.google.com/~r/c4lj/~3/2tGr8n1HUJQ/9281 2014/01/18 - 20:42

The Unix environment offers librarians and archivists high-quality tools for quickly transforming born-digital and digitized assets, such as resizing videos, creating access copies of digitized photos, and making fair-use reproductions of audio recordings. These tools, such as ffmpeg, lame, sox, and ImageMagick, can apply one or more manipulations to digital assets without the need to manually process individual items, which can be error prone, time consuming, and tedious. This article will provide information on getting started in using the Unix environment to take advantage of these tools for batch processing.
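As a minimal example of the batch pattern, the sketch below wraps ImageMagick's convert in Python to produce JPEG access copies of a directory of TIFF masters; the directory names, size limit, and quality setting are assumptions:

```python
import pathlib
import subprocess

masters = pathlib.Path("masters")        # directory of TIFF preservation masters
access = pathlib.Path("access")
access.mkdir(exist_ok=True)

for tiff in sorted(masters.glob("*.tif")):
    jpeg = access / (tiff.stem + ".jpg")
    # ImageMagick: shrink to fit within 1200px (only if larger) and save a JPEG copy
    subprocess.run(
        ["convert", str(tiff), "-resize", "1200x1200>", "-quality", "85", str(jpeg)],
        check=True,
    )
    print("wrote", jpeg)
```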

http://feedproxy.google.com/~r/c4lj/~3/zNs7AcbBFxw/9158 2014/01/18 - 20:42

Audio and video content forms an integral, important and expanding part of the digital collections in libraries and archives world-wide. While these memory institutions are familiar and well-versed in the management of more conventional materials such as books, periodicals, ephemera and images, the handling of audio (e.g., oral history recordings) and video content (e.g., audio-visual recordings, broadcast content) requires additional toolkits. In particular, a robust and comprehensive tool that provides a programmable interface is indispensable when dealing with tens of thousands of hours of audio and video content.

FFmpeg is comprehensive and well-established open source software that is capable of the full range of audio/video processing tasks (such as encode, decode, transcode, mux, demux, stream and filter). It is also capable of handling a wide range of audio and video formats, a unique challenge in memory institutions. It comes with a command line interface, as well as a set of developer libraries that can be incorporated into applications.

http://feedproxy.google.com/~r/c4lj/~3/xwRrXK6SQ4w/9128 2014/01/18 - 20:42

This article presents a case study of a project, led by Wikipedians in Residence at OCLC and the British Library, to integrate authority data from the Virtual International Authority File (VIAF) with biographical Wikipedia articles. This linking of data represents an opportunity for libraries to present their traditionally siloed data, such as catalog and authority records, in more openly accessible web platforms. The project successfully added authority data to hundreds of thousands of articles on the English Wikipedia, and is poised to do so on the hundreds of other Wikipedias in other languages. Furthermore, the advent of Wikidata has created opportunities for further analysis and comparison of data from libraries and Wikipedia alike. This project, for example, has already led to insights into gender imbalance both on Wikipedia and in library authority work. We explore the possibility of similar efforts to link other library data, such as classification schemes, in Wikipedia.

http://feedproxy.google.com/~r/c4lj/~3/pYOo3Xb6DcA/8964 2013/10/24 - 14:20

The Remixing Archival Metadata Project (RAMP) is a lightweight web-based editing tool that is intended to let users do two things: (1) generate enhanced authority records for creators of archival collections and (2) publish the content of those records as Wikipedia pages. The RAMP editor can extract biographical and historical data from EAD finding aids to create new authority records for persons, corporate bodies, and families associated with archival and special collections (using the EAC-CPF format). It can then let users enhance those records with additional data from sources like VIAF and WorldCat Identities. Finally, it can transform those records into wiki markup so that users can edit them directly, merge them with any existing Wikipedia pages, and publish them to Wikipedia through its API.
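A minimal sketch of the extraction step, assuming an EAD 2002 finding aid and the lxml library; this is not RAMP's own code, and the file name is hypothetical:

```python
from lxml import etree

EAD_NS = {"ead": "urn:isbn:1-931666-22-9"}   # EAD 2002 namespace

def biographical_note(finding_aid_path):
    """Pull the text of <bioghist> paragraphs from an EAD 2002 finding aid."""
    tree = etree.parse(finding_aid_path)
    paragraphs = tree.xpath("//ead:bioghist/ead:p", namespaces=EAD_NS)
    return "\n\n".join("".join(p.itertext()).strip() for p in paragraphs)

print(biographical_note("papers_of_jane_doe.xml"))   # hypothetical finding aid
```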

http://feedproxy.google.com/~r/c4lj/~3/wllk6i8_k1Q/8962 2013/10/24 - 14:20

The ArchiveGrid discovery system is made up in part of an aggregation of EAD (Encoded Archival Description) encoded finding aids from hundreds of contributing institutions. In creating the ArchiveGrid discovery interface, the OCLC Research project team has long wrestled with what we can reasonably do with the large (120,000+) corpus of EAD documents. This paper presents an analysis of the EAD documents (the largest analysis of EAD documents to date). The analysis is paired with an evaluation of how well the documents support various aspects of online discovery. The paper also establishes a framework for thresholds of completeness and consistency to evaluate the results. We find that, while the EAD standard and encoding practices have not offered support for all aspects of online discovery, especially in a large and heterogeneous aggregation of EAD documents, current trends suggest that the evolution of the EAD standard and the shift from retrospective conversion to new shared tools for improved encoding hold real promise for the future.
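A hedged sketch of how such completeness checks might be tallied across a corpus of EAD 2002 files; the elements checked here are illustrative, not the paper's actual criteria:

```python
import pathlib
from collections import Counter
from lxml import etree

EAD_NS = {"ead": "urn:isbn:1-931666-22-9"}
# Elements whose presence we treat as a rough proxy for discovery support.
CHECKS = {
    "title": "//ead:archdesc/ead:did/ead:unittitle",
    "date": "//ead:archdesc/ead:did/ead:unitdate",
    "extent": "//ead:archdesc/ead:did/ead:physdesc/ead:extent",
    "abstract": "//ead:archdesc/ead:did/ead:abstract",
    "subjects": "//ead:archdesc/ead:controlaccess/ead:subject",
}

counts, total = Counter(), 0
for path in pathlib.Path("ead_corpus").glob("*.xml"):   # assumed corpus directory
    total += 1
    tree = etree.parse(str(path))
    for label, xpath in CHECKS.items():
        if tree.xpath(xpath, namespaces=EAD_NS):
            counts[label] += 1

for label in CHECKS:
    print(f"{label}: {counts[label]}/{total} finding aids")
```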

http://feedproxy.google.com/~r/c4lj/~3/IMMc7KtkUN8/8956 2013/10/24 - 14:20

The Digital Collections digital repository at the University of Maryland Libraries is growing and in need of a new backend storage system to replace the current filesystem storage. Though Apache Hadoop is not a traditional storage management system, we chose to evaluate it because of its large and growing community and software ecosystem. Additionally, Hadoop’s capabilities for distributed computation could prove useful in providing new kinds of digital object services and maintenance for ever-increasing amounts of data. We tested storage of Fedora Commons data in the Hadoop Distributed File System (HDFS) using an early development version of the Akubra-HDFS interface created by Frank Asseg. This article examines the findings of our research study, which evaluated Fedora-Hadoop integration in the areas of performance, ease of access, security, disaster recovery, and costs.

http://feedproxy.google.com/~r/c4lj/~3/2dX00_-4Omo/8988 2013/10/24 - 14:20

The National Library Board of Singapore has successfully used Apache Mahout to link contents in several collections, such as its Infopedia collection of articles (http://infopedia.nl.sg). This article introduces Apache Mahout (http://mahout.apache.org) and focuses on its ability to link content through text analytic techniques. The article will run through the what, the why, and the how. If there is a big collection of content that needs to be linked, Apache Mahout may just be the answer.

http://feedproxy.google.com/~r/c4lj/~3/wzEuXIut9Xk/8912 2013/10/24 - 14:20

The general movement towards streaming or playing videos on the web has grown exponentially in the last decade. The combination of new streaming technologies and faster Internet connections continues to provide an enhanced and robust user experience for video content. For many organizations, adding videos to their websites has transitioned from a “cool” feature to a mission-critical service. Some of the benefits of putting videos online include engaging and converting visitors, raising awareness or driving interest, and sharing inspirational stories or unique recent events. Along with the growth in the use of and need for video content on the web, delivering videos online remains a messy activity for developers and web teams. Examples of existing challenges include creating more accessible videos with captions and delivering content (using adaptive streaming) to the diverse range of mobile and tablet devices. In this article, we report on the decision-making and early results in using the Kaltura video platform in two popular library platforms: CONTENTdm and DSpace.

http://feedproxy.google.com/~r/c4lj/~3/ZbE9HQ-dpgU/9059 2013/10/24 - 14:20

This paper describes tools and methods developed as part of Linked Jazz, a project that uses Linked Open Data (LOD) to reveal personal and professional relationships among jazz musicians based on interviews from jazz archives. The overarching aim of Linked Jazz is to explore the possibilities offered by LOD to enhance the visibility of cultural heritage materials and enrich the semantics that describe them. While the full Linked Jazz dataset is still under development, this paper presents two applications that have laid the foundation for the creation of this dataset: the Mapping and Curator Tool, and the Transcript Analyzer. These applications have served primarily for data preparation, analysis, and curation and are representative of the types of tools and methods needed to craft linked data from digital content available on the web. This paper discusses these two domain-agnostic tools developed to create LOD from digital textual documents and offers insight into the process behind the creation of LOD in general.

http://feedproxy.google.com/~r/c4lj/~3/FsbjQ5Xdn8U/8670 2013/07/18 - 16:20

Although the Linked Data paradigm has evolved from a research idea to a practical approach for publishing structured data on the web, the performance gap between currently available RDF data stores and the somewhat older search technologies has not yet been closed. Combining Linked Data with a search engine can help to improve ad hoc retrieval. This article presents and documents the process of building a search index for the Solr search engine from bibliographic records published as linked open data.
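A minimal sketch of that pipeline, assuming rdflib, a Turtle-serialized record at a placeholder URL, Dublin Core properties, and a local Solr core named bib; the article's own mapping is certainly richer:

```python
import requests
from rdflib import Graph, Namespace

DC = Namespace("http://purl.org/dc/elements/1.1/")

# Load a bibliographic record published as RDF (URL and vocabulary are assumptions).
g = Graph()
g.parse("http://example.org/record/123", format="turtle")

# Flatten selected properties into a Solr-friendly document.
doc = {"id": "record-123",
       "title": [str(o) for o in g.objects(None, DC.title)],
       "creator": [str(o) for o in g.objects(None, DC.creator)]}

# Send the document to a local Solr core named 'bib' and commit.
resp = requests.post("http://localhost:8983/solr/bib/update?commit=true",
                     json=[doc])
resp.raise_for_status()
```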

http://feedproxy.google.com/~r/c4lj/~3/sGaDgO6E9X0/8526 2013/07/18 - 16:20

Analyzing anonymized query and click-through logs leads to a better understanding of user behaviors and intentions, and provides opportunities to create an improved search experience. As a large-scale provider of SaaS services that returns search results against a single unified index, Serials Solutions is uniquely positioned to learn from the dataset of queries issued to its Summon® service by millions of users at hundreds of libraries around the world.

In this paper, we describe the Relevance Metrics Framework that we use to analyze our query logs and provide examples of insights we have gained during development and implementation. We also highlight the ways our analysis is inspiring changes to the Summon® service to improve the academic research experience.
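A hedged, much-simplified illustration of log-derived metrics (not Serials Solutions' actual framework): compute click-through rate and mean reciprocal rank from a hypothetical per-query log:

```python
import csv
from collections import defaultdict

# Hypothetical log with columns: query_id, clicked_rank (blank when no result
# was clicked). Neither the columns nor the metrics are the Relevance Metrics
# Framework itself; they only illustrate the kind of analysis described.
clicks = defaultdict(list)
with open("query_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        rank = row["clicked_rank"]
        clicks[row["query_id"]].append(int(rank) if rank else None)

total = len(clicks)
clicked = [ranks for ranks in clicks.values() if any(r is not None for r in ranks)]
reciprocal_ranks = [1.0 / min(r for r in ranks if r is not None) for ranks in clicked]

print(f"click-through rate: {len(clicked) / total:.2%}")
mrr = sum(reciprocal_ranks) / len(reciprocal_ranks) if reciprocal_ranks else 0.0
print(f"mean reciprocal rank (clicked queries): {mrr:.3f}")
```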

http://feedproxy.google.com/~r/c4lj/~3/tlIZsDpKSzA/8693 2013/07/18 - 16:20