A Sign of Good Things to Come
This year I was getting (back) together with speakers and thought-leaders I'd not seen in years, and others I'd never seen present before at all. Still, surprisingly and wonderfully, as a group we were basically all on the same page about most of the best practices, and about the key issues holding organisations back today. The market, in both thinking and technologies, is consolidating.
It's great news, because unlike 5 years ago, when we were discussing the technical details of how systems should be built, the main topic at content conferences today has become what it always should have been: people and process first, technology second.
It was a bit concerning for a regular conference speaker like me, actually, because if everyone is on the same page, we all have to work that extra bit harder to bring our own unique thinking and value to the party.
Here are some common themes:
The ‘Holistic’ view
The word ‘holistic’ popped up regularly; the ‘big picture’ / ‘cross-functional’ view.
Content integration across disciplines is key to getting the most out of your content investments. A workflow that gives you efficient touch-points with Subject Matter Experts and/or unstructured islands of content in the business is vital if you want to avoid creating yet another siloed process and missing huge opportunities for collaboration. And because migrating people, process, content and culture takes time, your migration plan and project scope have to account for it.
For example, you want to create knowledge bases powered by a Component CMS, and have SME contributions in there. Have you got a workflow that allows you to capture that content in a way that works for them? Maybe in an easy-to-use XML tool, or in an unstructured way to which you can add key metadata and/or structure? If you want them to contribute directly into your knowledge base, have you connected with their managers to make sure there are incentives in place in their performance evaluations to share knowledge? If knowledge sharing is always a ‘bonus’ thing, and not a real staff responsibility, then few will take the time to do it.
In an excellent presentation, Gabor Fari took time to illustrate the important line between DITA as a standard and the metadata implementations required to make it a real intelligent content solution, and referred to the need for easy-to-use systems that present metadata to users so that just about anyone could pull together dynamic documents as easily as making an iTunes playlist (a sentiment I've been known to endorse as well).
You’ve got to make it easy for many types of user to participate in some way with your content strategy.
DITA: we get it
When Michael Boses from Quark asked if anyone in the room didn't know what DITA was, not a single hand was raised. Even allowing for the shyness that probably kept a few hands down, that's a big change: when we asked similar questions 3 years ago, there were still lots of hands in the air!
Metadata is the key
I hadn't even thought to call this out, because it's been so fundamental to my thinking for a few years now. What really made it click for me was when someone tweeted during my talk that in 'tomorrow's content systems most links won't be made because a user decides point A should link to point B, they will be made automatically because point A and B share common metadata which gives them an automatic relationship'.
The more intelligent the system and the more self-descriptive the content, the more navigation can build itself. This is quite advanced for many people and quite clear for others; I think it needs to be the topic of an upcoming webinar, with concrete examples of how such relationships can be taken advantage of.
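To make the idea concrete, here is a minimal sketch of metadata-driven linking. The topic names, the "product" field and its tag values are all hypothetical, invented purely for illustration; the point is that no author ever declares "A links to B".

```python
def related_topics(topics):
    """Build links between topics that share at least one metadata value,
    instead of relying on hand-authored cross-references."""
    links = {}
    for name, meta in topics.items():
        links[name] = [
            other for other, other_meta in topics.items()
            # a link exists wherever the "product" metadata sets overlap
            if other != name and meta["product"] & other_meta["product"]
        ]
    return links

# Hypothetical topics, each carrying only self-descriptive metadata
topics = {
    "install-guide":   {"product": {"widget-pro"}},
    "troubleshooting": {"product": {"widget-pro", "widget-lite"}},
    "release-notes":   {"product": {"widget-lite"}},
}

print(related_topics(topics))
```

Add or retag a topic and its links appear or disappear automatically; the navigation literally builds itself from the metadata.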
Metadata – Search / Faceting
Search – everyone's favourite thing right now – helps you find something, but metadata helps you know what you've found. Gabor, I and others talked about how brute-force search doesn't cut it when you've got multiple slightly different versions or copies that are differentiated only by their metadata. Metadata is also vital for narrowing down search hits in a large enterprise scenario. This is often called 'faceted search', and the Semantic Web is built largely on the same principle.
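A toy sketch of faceted narrowing shows why brute-force text search isn't enough. The documents, field names and values below are invented for illustration: the text query matches all three near-identical hits, and only the metadata facets can tell the copies apart.

```python
# Hypothetical document store; "version" and "audience" are example facets
docs = [
    {"title": "Setup guide v1", "version": "1.0", "audience": "admin"},
    {"title": "Setup guide v2", "version": "2.0", "audience": "admin"},
    {"title": "Setup tutorial", "version": "2.0", "audience": "end-user"},
]

def search(docs, text, **facets):
    """Keyword match on the title, then narrow by exact facet values."""
    hits = [d for d in docs if text.lower() in d["title"].lower()]
    for field, value in facets.items():
        hits = [d for d in hits if d.get(field) == value]
    return hits

print(len(search(docs, "setup")))  # brute search: all 3 hits
print(search(docs, "setup", version="2.0", audience="admin")[0]["title"])
```

Real enterprise search engines do the same thing at scale, but the principle is just this: text finds candidates, metadata identifies the right one.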
Metadata – Convergence
Rahel Bailie's presentation reinforced this further by talking about content that isn't just reused and single-sourced across deliverables on the way out (the PDF, CHM and web versions of the same thing), but recombined into 'portals' that bring content together based on its taxonomical (metadata) similarity. The difference between a product portal and any other product info web page is a bit philosophical, but the high-level distinction is this: a traditional web page is a place to which you push product information; a portal is a page that pulls product information together from various places and integrates it.
Another item from Rahel's session that I thought was pretty unique, but still metadata-related, was the 'sunsetting' of content:
- Does your new content architecture take into account the need to pull content down from the places it’s delivered?
- Can you automate notifications to users that might have obsolete content to go get updates?
- When setting up faceted search, can you filter out lots of redundant hits by removing things that have outlived their usefulness?
Food for thought.
Measuring ROI Isn’t Easy
Measuring ROI is critical, but it's just as much a dark art as it ever was. There are finally case studies that show cost reduction from reuse, as opposed to just localisation costs (the classic low-hanging fruit of the ROI-tracking world), but isolating and reporting on the impact of your content improvements separately from other corporate initiatives is very difficult.
Still, you’ve got to do your best.