In my inaugural post for the SharePoint Gone Wild blog series, which focused on governance lacking accountability, I introduced the key business drivers for governance. Now that we’ve addressed that major pain point, I’d like to continue the discussion in the new year with another challenge: governance lacking in quality. I believe this is a natural segue from accountability, because even when content has an accountable owner, unless that person also takes responsibility for maintaining its quality, the content becomes a risk to the organization.
In the discussions I have with AvePoint’s existing customers regarding governance, quality is a big issue and can cost the business in various ways, including:
· Making decisions based on inaccurate information
· Sending out inaccurate information externally
· Duplication of content, which leads to confusion about which copy is accurate
There are a few common scenarios that typically lead to low quality:
· No accountability – As I covered in my previous article, if there is no accountability, quality will inevitably suffer.
· Initial push – When someone is made accountable for a new sub site or takes over an existing one, there is often a concentrated focus on improving the quality of the content. More often than not, this focus dwindles over time because there is typically no process in place to remind the people accountable that their content needs some “tender loving care”.
· Content lifecycle – The lifecycle of content differs based on the types of sub sites that exist. For example, a project team site may only be relevant for six months, after which people move on to other projects – so the content is never updated or finalized, and is left in an unfinished state.
· Duplication – In some instances, where there is no clear distinction of accountability, the same content is created in two different areas on a site, each with a different accountable owner. This typically happens when there is poor information architecture: knowledge workers create content in disparate locations because each believes their chosen spot is the best place for the content to “live”, based on their own interpretation of best practices.
Some of the organizations I work with have interesting ways of mitigating the risk of low content quality in the field. The most common is to put performance measures in place for information managers, librarians, or records managers based on how well they maintain content quality.
Typically, teams responsible for quality are allocated areas of the intranet or other workloads. Putting the people in place isn’t the most difficult part, though: it is measuring the quality of content. Many organizations have a request system in place for reporting potentially low-quality documents. Once a document is reported, the relevant quality team follows up and checks the content. The request system is communicated internally, with a rewards program in place to encourage contributions.
While one person may be accountable for content, it might make sense to identify subject matter experts who can act as content quality control agents. Why? Generally speaking, the person put in charge of content isn’t always the right person to gauge its quality.
Another approach is to require the accountable stakeholders for a sub site to re-publish each document as a new major version every six months. The librarians then produce reports on which documents haven’t been re-published, and risk can be assessed by correlating this report with information on how frequently the content is accessed, drawn from web analytics or auditing reports.
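To make the idea concrete, here is a minimal sketch of that kind of stale-content report. It assumes you have already exported a list of documents with a last-published date and a view count (for example, from a web analytics report); the field names and data source are illustrative, not a real SharePoint API.

```python
from datetime import date, timedelta

# Roughly six months: documents not re-published within this window
# are considered stale.
STALE_AFTER = timedelta(days=182)

def stale_content_report(documents, today=None):
    """Return stale documents, most-viewed (highest risk) first.

    Each document is a dict with 'title', 'last_published' (a date),
    and 'views' (an int, e.g. taken from a web analytics report).
    """
    today = today or date.today()
    stale = [d for d in documents
             if today - d["last_published"] > STALE_AFTER]
    # The most frequently accessed stale content poses the greatest
    # risk, so sort by views in descending order.
    return sorted(stale, key=lambda d: d["views"], reverse=True)

# Illustrative data only.
docs = [
    {"title": "Travel policy", "last_published": date(2012, 1, 10), "views": 540},
    {"title": "Project Alpha notes", "last_published": date(2012, 2, 1), "views": 12},
    {"title": "Expense guide", "last_published": date(2012, 11, 5), "views": 300},
]

for d in stale_content_report(docs, today=date(2012, 12, 1)):
    print(d["title"], d["views"])
```

Running this against the sample data flags the two documents that haven’t been re-published in over six months, with the heavily viewed one listed first – exactly the correlation the librarians’ reports are meant to surface.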
The next business driver for governance I’ll tackle, in my next post, is “appropriateness”.