Guest Post by Keith Schengili-Roberts
A note from Noz:
Hi-di-ho, Readerinos!
With the holiday season and some intense projects I’ve not posted in quite a while. In a way I’m still not, in that today’s post comes from my new colleague and peer in the DITA community, Keith Schengili-Roberts. Prior to the Congility conference I did an interview post on Keith’s well-received ditawriter.com blog, and I asked Keith to reciprocate. The resulting post below was actually inspired by one of our discussions while working together on some recent client work.
Enjoy!
PS – Here’s a reminder of what “NF2” means, if you needed one.
—————————–
It used to be that editors were much more common in the technical writing business. I have been around long enough to remember people who had “Technical Editor” as their formal job title. Over the years, economic and production pressures have forced firms to hire more writers instead of editors. This often results in little or no oversight of existing content, furthering the pressure to silo writers and their content. Content was created, reviewed, and delivered, and rarely looked at again unless a customer raised an issue against a specific piece of it.
Ask most technical writers which aspects of DITA are important to them, and the ability to reuse content will be near the top of the list. Not just their own content, but content that was developed by other writers. Whenever a piece of content is reused, the writer who found it is saved from having to rewrite it. At the same time, the organisation benefits by saving cost, and the user benefits, because reused content makes for more consistent deliverables. I have noticed over the years that one of the other inadvertent bonuses of this approach is that the topics that are looked at most – when being evaluated for reuse – become “edited topics”, improved in the process of being reused. In DITA environments, I am seeing the return of the editorial process as technical writers review and inevitably revise content written by their peers.
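To make that concrete, here is a minimal sketch of DITA’s most common reuse mechanism, the conref; the file names and element ids below are invented for illustration. A warning is written once, and another topic pulls it in by reference rather than rewriting it:

    <!-- warnings.dita: the canonical warning, written (and edited) once -->
    <concept id="safety_warnings">
      <title>Safety warnings</title>
      <conbody>
        <note id="esd_warning" type="caution">Wear an antistatic wrist strap
          before opening the chassis.</note>
      </conbody>
    </concept>

    <!-- install.dita: reuses the warning by reference instead of rewriting it -->
    <note conref="warnings.dita#safety_warnings/esd_warning"/>

Every writer who evaluates warnings.dita for possible reuse is, in effect, giving it another editorial pass.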
Legacy Conversion Can Equal a First Edit Pass to Old Content
While working as an Information Architect I remember running across a classic case of this sort, where a writer had been producing the same manual for years, and took the approach of adding new material he received from the software developer SMEs piecemeal to the existing content. What must at one time have been a decent manual had become a hodge-podge of barely comprehensible content, with varying punctuation styles and different terms (and spellings!) for the same items; inconsistency was the norm rather than the exception. Any editor glancing at this material would have immediately taken out their red pen and set to work (and in fact a section from this original work became a standard editing test for technical writer candidates looking for a job with the organisation).
Since things were so siloed, nobody on the writing team had the opportunity to review this work, and the hapless end-users had to decipher the deliverable as best they could. In moving this content to DITA, the material finally had the chance to be thoroughly edited. Terms were made consistent and the content was cleaned up. Simply converting this legacy content to proper, topic-oriented and info-typed DITA ensured that it was edited and made consistent, and that it was ready to be reused effectively in other end-user content.
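For anyone who has not seen info-typing in practice, here is a hedged sketch of what one cleaned-up procedure might look like after conversion (the product and steps are invented). A DITA task topic gives each part of a procedure its designated element, which makes the old inconsistencies much harder to hide:

    <task id="replace_filter">
      <title>Replacing the air filter</title>
      <shortdesc>Replace the air filter every six months to maintain
        rated airflow.</shortdesc>
      <taskbody>
        <prereq>Power down the unit and unplug it.</prereq>
        <steps>
          <step><cmd>Open the front panel.</cmd></step>
          <step><cmd>Slide the old filter out and insert the new one.</cmd></step>
        </steps>
        <result>The unit runs at its rated airflow.</result>
      </taskbody>
    </task>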
Reused Content Becomes Improved Content
I have seen evidence of content being edited and refined over time, so that it becomes clearer and more consistent as more eyes are brought to bear on the original versions. I remember another occasion where I was reviewing some end-user content, and I twigged that what I was reading seemed awfully familiar. The CCMS we were using allowed me to locate the topic and discover where else it was being used. It turned out that it was a concept topic that I had originally written for a highly technical electrical engineering document. I could tell that it had been changed, but ultimately for the better, so that it could work effectively in two very different deliverables intended for two distinct audiences.
As a consultant I am now seeing similar situations at other organisations. It seems to be a natural outcome of the topic writing process when handled within a CCMS with decent search capabilities. This type of behaviour should definitely be encouraged, not only because it is good for the content (and its readers) but because it is good for the writers in several ways.
I find that the better writers on a team are natural editors as well. Allowing them to sink their teeth into someone else’s content means that they learn more about that content (and about other products and projects), and it opens up the lines of communication within the team, as would-be editors ask the original writers about their content, helping to de-silo both writer and editor.
Get it? Reuse means looking at content with a fine-toothed comb…
Possible Pitfalls to Be Avoided
When managers see this type of behaviour it needs to be encouraged, but definitely guided. One of the obvious pitfalls when writers-cum-editors run rampant is that they revise material so that it fits their own deliverable’s needs but not those of the original deliverable. Reminding all of the writers of DITA best practices – that they need to check content dependencies and to talk to the original writer of the topic when changes go beyond mere grammar corrections and typo fixes – ought to solve this potential issue before it starts.
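A minimal sketch of why that dependency check matters (the map and topic names are invented): the same topic can be referenced from the maps of two different deliverables, so an edit made to suit one lands silently in the other.

    <!-- admin_guide.ditamap -->
    <map>
      <title>Administrator Guide</title>
      <topicref href="topics/backup_database.dita"/>
    </map>

    <!-- quick_start.ditamap: the same topic, pulled into a second deliverable -->
    <map>
      <title>Quick Start Guide</title>
      <topicref href="topics/backup_database.dita"/>
    </map>

Any decent CCMS can produce a where-used report for backup_database.dita; consulting it before making more than cosmetic changes is exactly the best practice described above.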
The other potential issue is that a writer may think they are being singled out by a peer, which can cause friction. There are few writers – myself included – who do not react with some shock when the red pen cuts deeply into their work. In this situation it is good to be able to point to an existing style guide (both for DITA mark-up and for writing generally) so that writer and editor know where they stand.
While many organisations will find it hard to create (or in some cases, reinstate) the technical editor role, having the editorial function emerge within the writing team is a natural outcome when you have good technical writers creating topics in a CCMS. The best deliverables are those that have had several sets of critical eyes on them, and good editing makes for better writers overall.
It’s a win-win-win situation for the writers themselves, the readers of the content, and ultimately for the organisation.
—————————–
A note from Noz:
To formalise and drive forward the effect that Keith is discussing here, I have even gone so far as to formally recommend that clients hire dedicated editors to coach and monitor writers. The best way forward depends on the organisation and context, but the impact of reuse on the editorial process is a positive trend in all cases.
To learn more about Keith, do check out his blog on ditawriter.com.
Surely the underlying problem is that clients don’t recognize the value of editorial work. So, we who do recognize its value have either to sell it under different names (e.g., developmental editing > content strategy*) or to sneak it into other processes (as in your DITA example).
Chris Burd
*OK, there are differences, but the jobs are similar.
I would definitely agree. I call it the ‘stealth recommendation’. When you’re trying to help someone who has an irrational dislike of elephants, sometimes you have to say, ‘Would you like (and pay for) a grey mammal 2 meters tall with a long nose, big ears, and big white shiny pointy things sticking out front?’ and when they say ‘Yes!’ you hand them an elephant* and say, ‘Here you go! It’s called a Golden Retriever.’
I’m of the opinion that CS work is always consultancy, even when you’re a staffer (thanks to Nikki Tiedtke for first putting that into words http://twitter.com/NikkiTiedtke). And consultancy is about getting the best out of the situation for the client, not about winning over everyone on every point of contention. If saying ‘You should get an editor’ fails, then selling it through the backdoor is a perfectly acceptable recourse if it’s really what’s best for the client and their users.
*Bravo on those biceps
I completely agree that the more pairs of competent eyes that review a text, the better it becomes. I frequently read some technical doc in our company and think “Who on earth wrote this?” before I realise that I was the original author, two or three years ago. Since then my writing has improved, my product knowledge has deepened, and my understanding of the reader’s tasks has increased, so I know I could do a better job. That’s why a regular editing and review cycle is a really important factor. If DITA conversion is the catalyst for such a review, so much the better. (This article also dovetails nicely with the presentation I’m giving next month at Technical Communication UK!)
I’m sorry I’ll be missing that, David! I’ll be at LavaCon and ContentStrategyWorkshops half-way around the world at the time, so don’t take it personally!
We’ve all been there – returning to even our old stuff and going ‘Seriously…?’
Do you think we need proper staff editors, or is sneaking it in ok? I’m interested in your opinion.
I believe this is a universal truth. When you change systems, everything gets looked at with fresh eyes and cleaned up. I call it the spring cleaning effect, and I think it produces a substantial uptick in quality and efficiency following any change in systems, completely independent of the virtues of the system itself.
In some sense, though, DITA may go beyond the normal spring cleaning effect. Reuse means that many eyes look at a topic from many angles, and thus it gets more and more cleaning. That might be a good thing, but it also seems to pose a conundrum.
Reuse in DITA is supposed to save the organization time, by removing the need to write and translate content multiple times. But if, as seems to be the case, a piece of content has to be revised several times before it is truly reusable, and if, as a consequence, it has to be re-translated each time a revision occurs, and if it becomes necessary for anyone considering any revision to a reused component to check every situation in which it is currently used, isn’t all that overhead in danger of negating the cost savings that reuse was supposed to bring? And doesn’t this problem become worse over time as the content becomes entangled in more and more reuse scenarios?
Would it not be better to design the system such that any reusable object was a complete object in itself, with a clearly defined and documented purpose, which would work as well in any context without modification? True, such an object would be looked at much less often, and thus polished by fewer hands, but it would better fulfill the goal of avoiding duplication of effort in writing and translation. Additionally, the fact that it was written according to a strict set of rules to fulfill a strictly defined purpose might serve to ensure that it was of sufficient quality the first time.
Hi Mark, I was surprised to see this comment here! I must have missed it! My apologies.
I think your conundrum is only an issue if DITA’s value proposition somehow stated that all benefits would be realized in one iteration of each topic.
Keith’s point was that as eyes pass over topics, they get better and become more and more reusable. What was not anticipated can be implemented during these passes. Yes, this means re-translation, re-review, etc., but the idea is that this only happens if there is a purpose. The changes should be substantive improvements, not just polish.
Does this negate the benefit of DITA? Not at all – this *is* a benefit of DITA. DITA’s methodology builds this into your work and your culture such that you’re actually doing it as opposed to the more usual case: not doing it at all.
“Would it not be better to design the system such that any reusable object was a complete object in itself, with a clearly defined and documented purpose, which would work as well in any context without modification?”
To me this sounds dangerously close to “Why make mistakes at all when you can just do it right the first time?” Everything you’ve stated should be the case when any topic is written in the first place. That’s the process working perfectly, and when it does, yes, it does end after the first iteration.
But because life rarely lives up to ideal specs, in implementations it’s normal and acceptable to see iterative improvement of topics until their content fits the definition well enough that they need not be touched again, possibly for years. Not to mention that you’re assuming it’s possible to set these clear, documented purposes and definitions at the beginning with all possible reuse scenarios in mind: “perfectly context-agnostic content”.
We should all set up clear guidelines, and yes, be strict where necessary, but XML has been dogged by overly strict, overly enthusiastic implementations for years. It was a major barrier to adoption for many and a project-killer for many others. Having a looser system that builds in more organic and person-friendly improvement makes for a system which is used and supported more by the participants. Modular writing already implies a huge increase in up-front analysis and planning. It is important not to make the up-front overhead so great that the project loses goodwill or, worse, misses deadlines.
I’m not saying that you’re proposing to strangle users with rules, nor that your projects are in any danger, but simply that there’s nothing wrong with Keith’s approach either and it has real benefits in implementation. I always take a ‘simple as it can be to get the job done’ approach, especially with new projects where the organisation needs time to adapt to new tools, methods, structures, and concepts that disrupt their working lives.
In short, I’ll close with the words of one of my favorite clients, whose CEO said they reached their great market success “by delivering 80% today, not 100% tomorrow”.
Hi Noz, no, I am not talking about not making mistakes, though we could do far more than we do to reduce mistakes — but I’ll get to that later.
I’m talking about the difference between giving content one role and then giving it another role (reuse), and giving content a single role in which it is possible that it will recur in several places (recurrence rather than reuse).
Although Keith does not say so here, I have seen other DITA advocates talk about the need to edit content each time it is reused, simply to make it fit in each of the places it is reused. That is likely to have to happen when you assign one piece of content several roles, since the other roles were not fully anticipated when the content was first written.
Another cost of reuse is the process that Keith does mention, which is evaluating content for reuse. Recurrence requires neither that the content be found nor that it be evaluated. It simply recurs when it should. In a reuse scenario, the author has to both find and evaluate the content, and may well end up editing it as well, which will trigger new rounds of translation, and may also require attention from the original author and any reusing authors to make sure it still conforms to the uses they made of it. I have never seen any of these costs acknowledged in any DITA ROI calculator. (I’m not claiming to have seen them all, of course, but all the ones I have seen treat the reuse process as essentially without cost, which is manifestly untrue.)
Returning to the quality issue, every part of industrial production has undergone a revolution in first run quality over the past few decades. Improving first run quality pays enormous dividends in terms not only of cost control, but also margins and market share. The only industrial function that has not undergone such a revolution is content production.
Continuous quality improvement is not about having workers initially produce poor quality and then gradually improving it over time by having more workers fix it up bit by bit. We would scoff at such a process in any other part of the plant. If that is a benefit of DITA, then DITA should be a laughing stock for having entirely missed the point.
Continuous quality improvement is about continuously improving your processes to reduce the number of defects you are creating, and thus improving first run quality. This is something that content producers have conspicuously failed to do over the last several decades, and DITA does absolutely nothing — does not claim to do anything — to change this.
Structured writing, properly applied, can play a significant role in improving first run quality. It can’t, of course, do it all by itself. You have to have the right people with the right knowledge, and the right culture. But structured writing provides the tool that the right people can use to greatly improve their first run quality, and to produce content that can recur, rather than having to be manually reused.
You are correct that some strict XML systems have failed in the past. They were strict about the wrong things, and were implemented without proper understanding, proper training, or proper approaches to information design. Running back to sloppy schemas that implement no discipline, and leaving yourself back at multiple refinement steps before content approaches anything resembling quality is simply running back to the old artisan model, but wrapping it in angle brackets.
Straw, meet man.
Mark, what you describe as “recurrence” is simply proper reuse, inside or outside DITA. What you describe as “typical DITA reuse” is bad reuse, in any architecture.
I would also like an explanation of how DITA prevents (or some other architecture enables) anticipating additional roles (“since the other roles were not fully anticipated when the content was first written”). What makes DITA particularly good or bad at this?
Sarah,
I fear I may not have made the distinction I was making entirely clear. Recurrence, as I have dubbed it here, means that no human effort, selection, or inspection is involved when the content recurs. It recurs because the terms of a query selected it. This is how content gets into a report in a typical database system: because it was returned by the query, not because somebody deliberately mapped it into the report.
In such a system, the recurrence would not trigger an edit cycle, such as Keith describes, because no one would look at the content at the moment of recurrence. In fact, no one would necessarily be aware that the content had recurred, just as in a database, no one is likely to be paying attention to how many times a particular row has occurred in reports.
So if that is good reuse in any system, and if reuse that involves human action to look at a topic and select it for reuse is bad reuse, then the reuse scenario that Keith is describing is bad reuse, and you and I are actually saying the same thing. (Note that Keith says “I could tell that it had been changed, but ultimately for the better, so that it could work effectively in two very different deliverables intended for two distinct audiences,” which implies that he did not anticipate the eventual reuse when he wrote the topic, since it had to be edited before the second use to make it work in both places.)
But since you accuse me of setting up a straw man, I suspect that is not what you mean. So I’m going to guess that you mean something different. Please forgive, and correct, me if I guess incorrectly.
My guess is that what you mean by good reuse is that the writer writes the content with full knowledge that it is going to be reused, and does everything they can to make it reusable. That’s reasonable. But then the question is, what help does the system give them to do that?
It’s pretty difficult to do the exercise of anticipating potential reuse scenarios for each individual piece of content you write. If you want the process to be reliable and repeatable, you will need to define a template to guide authors in creating different kinds of content.
You could do that in DITA using a specialization to create a strict template for a specific kind of content. It’s not my impression that most people do that (none of the descriptions of DITA reuse that I have read mention it, but I have certainly not read them all).
But if you do that, and you create a schema that guides writers to consistently create reusable content, and you attach enough metadata to it to make it clear what it is about, why do you still need to have the reuse done by having someone hardcode ids into a map or a conref? Why not let it simply recur as the result of a query?
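To make the contrast concrete (topic names invented, and acknowledging that standard DITA defines no query-based inclusion mechanism, so the query would have to live in the CCMS or build layer): the first two lines below are ordinary hardcoded reuse, in a map and in a topic body; the comment after them only sketches what query-driven recurrence would have to express.

    <!-- Manual reuse: an author found this content and hardcoded references to it -->
    <topicref href="topics/esd_warning.dita"/>
    <note conref="topics/esd_warning.dita#esd_warning/warning_note"/>

    <!-- Hypothetical recurrence: the deliverable declares a query, and any topic
         whose metadata matches simply appears, with no human selection.
         This is a sketch only, not standard DITA markup. -->
    <!-- include-where: audience = "installer" AND content-type = "safety-warning" -->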
Part of the answer to that, no doubt, is that reuse by specification of ids allows you to reuse content in arbitrarily structured documents which could not be created by queries because of the arbitrariness of their selection and ordering of content. They are cherry-picked, rather than built by rule.
I’ll concede that point, but I would also argue that we should be creating far fewer of such arbitrary documents. Their production is too slow and too labor intensive for a world demanding immediacy and brevity in communication. Every page is page one.