Book Club time! I’ve really enjoyed reading Daniel Santamaria’s new book, Extensible Processing for Archives and Special Collections. I asked for the chance to review Chapters 3, 4, and 5, because I thought that they would most directly relate to the accessioning and processing that I handle in my job. Since starting to read, however, I’ve found that Extensible Processing is much more holistic than its chapter headings imply — so, my first recommendation is to just read the whole thing. There are pieces of each chapter that I have found incredibly relevant to my work, beyond the three chapters designated for processing, backlogs, and accessioning.
I really like how Dan offers practical steps for establishing an extensible processing program. In Chapter 3, he explains how processing collections should be viewed as cyclical, rather than linear; he argues that collections and their descriptions should “always be seen as works in progress” (p. 29). His approach to processing always focuses on making the material available quickly, with the caveat that more arrangement and description can follow later, if use demands it. Chapter 4 is tailored to addressing backlogs, and encourages institutions to use collection assessments and surveys as a means of defining and prioritizing their backlogs in a structured way. Chapter 5 takes the processing guidelines from Chapter 3 and applies them to accessioning, encouraging processing at the point of accessioning. (Dan is not a fan of archives’ traditional approach of adding accessions to the backlog.)
Of the three chapters I am reviewing for this blog, I found Chapter 5, on accessioning, to be the most thought-provoking. We implemented processing at accessioning years ago at Duke, but we constantly acquire additions to collections and find ourselves pondering the most practical approach to integrating these new accessions with their existing collections. Dan advises adding accessions to the end of the collection, in their own series; this is certainly an approach I have used, but it has always felt temporary. Dan suggests that I just get used to this lack of closure. (I’m fine with that idea.) Another solution we sometimes use at Duke is to interfile smaller additions within the original collection, a tactic that Dan opposes because it can take a lot of time and upsets original order (pp. 61-62). My view is that interfiling one or two folders can be much easier (and more practical, in the long term) than adding folders to a random box at the end of the collection. But even when I disagree with the logistics of Dan’s approach, I agree with his overall argument: under extensible processing, accessions are no different from any other archival collection, and therefore it should be the user’s needs that drive our processing energies.
Whenever I read something well-written with lots of practical tips, I end up excited about getting back to work and just fixing everything. I found Dan’s review and application of MPLP more practical, and less perplexing (see what I did there?), than a number of other approaches I’ve read or heard about in recent years. It helps to remind ourselves that collections can always be revisited later. I appreciate his flexible approach to processing, which constantly keeps the user’s needs at the forefront. He believes in collecting and then using data, which eliminates the guessing game we often play when trying to prioritize processing demands. Furthermore, he repeatedly references the importance of buy-in from other staff and library administration, a topic he explores further in Chapter 8. “Oooh, my colleagues would not like this” was definitely at the front of my mind as I read his guidelines emphasizing the intellectual, rather than physical, arrangement and description of materials. I look forward to applying his suggestions as we continue to refine our processing procedures at Duke.