So earlier in this series Maureen looked at the chapters dealing with why repositories should implement an extensible processing program, and Meghan looked at the chapters that talk about the hows of implementation. I am focusing here on who is involved in implementing and maintaining an extensible processing program. My review covers Chapters 6-8, sections that in one way or another assess how an extensible processing program plays well with others: with the professional community and its systems (through the rigorous application of standards-based description), with repository staff and administration (through effective management of staff and advocacy to management and administrators), and with users (through seeing online digitized content as an end goal of processing).
One really important aspect of this book is that it makes a very serious case that while archival collections may all be unique, the ways that we approach them are not. The fundamentals of our work stay the same as does the end goal of quickly and effectively serving our user communities. Extensible processing techniques are carried out in similar ways at the collection level and the repository level, and they are supported and guided by widely accepted professional standards. While some detractors of baseline processing and other extensible processing techniques claim that these approaches are incompatible with standardized archival practice, Dan moves point by point through the most relevant sections of DACS explaining why the careful adherence to standardized description, far from being incompatible with minimal processing, in fact undergirds the entire enterprise of an extensible processing program. Archival descriptive standards are specifically designed to be flexible and to accommodate a range of levels of description and local practices. If they work right, and we do our jobs, they provide a way for the entire professional community to participate in and guide the principles behind individual processing programs at individual repositories.
So this sort of processing program is firmly based in broad professional standards, but on a more localized level there are any number of people involved in arrangement and description work. Chapter 8 focuses on the repository level and addresses how to lead, administer, and manage an extensible processing program, with a major focus on project planning and management. This section highlights one of the real strengths of the book: its concrete, realistic, and implementable advice. Santamaria walks the reader through various decision-making processes, discusses criteria for setting priorities, lays out the specific elements of a processing plan, examines resource allocation and personnel decisions, and explains how and why to adhere to firm timelines. This chapter is an excellent road map for a manager interested in taking the principles throughout the book and making them a reality. The specific suggestions are supplemented by a series of appendices that provide examples of processing plans and other forms of documentation to assist archivists in codifying their practice and moving towards an extensible processing model. This is a chapter I will come back to when I need to manage new projects, create buy-in from staff, and advocate for extensible processing procedures to my management and administration.
The final people affected by our arrangement and description decisions are, of course, our users. Chapter 7, "Digitization and Facilitating Access to Content," investigates user expectations around digital delivery of archival content (and our remarkable failure to meet them). Dan not only calls for digitization to be an integrated aspect of archival processing work (rather than a separate program) but frames this argument, usefully and importantly, as an ethical consideration of equitable access to collection resources. He states that
Just as with processing, if our goal is to provide ‘open and equitable access’ to collections material, archivists need to use all the tools at our disposal to allow researchers of all backgrounds access, not just those who can afford to travel to our repositories and visit during the work week. 
He then goes on to suggest models for broad digitization and offers concrete suggestions for how repositories can work digitization into their workflows, work with vendors, and manage privacy and copyright issues. For me, though, the heart of the chapter is the same message that is at the heart of the book and of this processing model as a whole: the insistence on equitable access.
These three chapters clearly articulate that the adherence to standards, the focus on end-user access, and the planning and management acumen that go into an extensible processing program all reiterate to the archival community that minimal processing is not lazy, sloppy processing. Dan reminds us, in what I think is one of the most important lines in the book, that
In an efficient extensible processing program the intellectual work of arranging material into broad groupings takes the place of neatly ordering items in folders and folders in series 
As archivists we add value to collections by applying our knowledge of how people and organizations work and by thinking critically about the records they create in that process. As a community we need to use our real professional skills to assess the records that our repositories hold. Quickly and competently assessing the nature of records is difficult, skilled, high-level work; refoldering is not. We need to focus our professional skills and our repositories' resources where it counts and where it is most likely to provide value to our various communities of stakeholders.
Santamaria, Daniel A. Extensible Processing for Archives and Special Collections: Reducing Processing Backlogs. Chicago: ALA Neal-Schuman, 2015, 85.
Ibid., 72.