My second session discussing Author Experience was with Noz Urbina. Our wide-ranging discussion touched on many points, starting with the fundamental questions: Why author experience? And why now?
Content Strategist, Urbina Consulting
Noz: I suppose I have to ask: why do we need Author Experience as a field? And why is it particularly relevant now?
Rick: To answer that, I can paraphrase from the introduction (p3).
Communication is the exchange of information between parties. There are three basic forms. The first two, acting and speech, are real-time. With the third, writing, our communication becomes non-real-time: there are separate stages of input, storage and output.
Until recently, non-real-time communication involved a single channel. The input process created the stored content, and that was the only version available for output. With digital communication, the logic is inverted but the result is the same: we start by considering how to deliver the content, which determines how we store it, and so how it will be input. There remains a direct link between the three stages.
This approach fails when we want to reuse content across channels. And multi- or omni-channel content delivery is now fundamental to staying competitive.
Noz: You’re right. With more than one delivery channel, we must stop writing in the presentation format. And that has implications for the author: if you’re not writing in a presentation format, you’re writing for multiple things at the same time.
That will be a serious challenge: it goes directly against everything we’ve been taught about how to write.
Rick: We need to train authors in a new approach to crafting content.
Noz: That’s a big ask. But there’s another reason we need to change how we approach content authoring, one that will require just as big a shift in mental models. (And if we are making one change, let’s make both together.)
This is something so recent that I think even we manage to forget it a lot. Considering only the web – not even worrying about multi-channel – we have to think of the needs of what Tim Berners-Lee tried to call web 3.0: the semantic web, with linked data. In this new web – the web of tomorrow, and a little bit of today – our content needs to be machine-readable as well as human-readable. Our content needs to be exposed to search engines and other systems.
Rick: Like that great example you like to show, with the search results for ASDA Sutton opening times: Google provides an answer it has extracted from the site’s content, without needing to send you to the source page.
Noz: Exactly. Authors need to be able to embed semantics – microformats and schema.org – into the content they create.
Bing, Google, Yahoo! and Yandex have all bought into semantic content. They have said your search results ranking will go up if you include intelligent content within ordinary web channels.
So even for those thinking only of a single, ordinary web channel, we must have smarter authoring tools. All authors – technical or not – need to be able to do this, with ease.
Rick: Right. We can’t expect them to manage the code-level structure of schema.org metadata. I’ve done that manually; I found it onerous. (And I’m pretty sure I have a higher tolerance than most for such pedantry.)
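To give a sense of the code-level structure Rick is referring to, here is a minimal sketch of the kind of schema.org markup behind an opening-times search result, built as JSON-LD. The store name and hours are invented for illustration; the point is that no author should have to write this by hand.

```python
import json

# Hypothetical example: schema.org JSON-LD describing a shop's opening
# hours, the kind of structured data search engines extract answers from.
store = {
    "@context": "https://schema.org",
    "@type": "GroceryStore",
    "name": "Example Store, Sutton",
    "openingHoursSpecification": [
        {
            "@type": "OpeningHoursSpecification",
            "dayOfWeek": ["Monday", "Tuesday", "Wednesday",
                          "Thursday", "Friday"],
            "opens": "08:00",
            "closes": "22:00",
        },
        {
            "@type": "OpeningHoursSpecification",
            "dayOfWeek": "Sunday",
            "opens": "10:00",
            "closes": "16:00",
        },
    ],
}

# A good authoring tool would capture "open Mon-Fri, 8am to 10pm" as simple
# structured fields and emit this JSON-LD itself, invisibly to the author.
jsonld = json.dumps(store, indent=2)
print(jsonld)
```

Even this small fragment shows why hand-managing such metadata is onerous: the nesting, the `@type` vocabulary and the time formats all have to be exactly right before a machine can use them.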
Noz: Yes. For both multi-channel and the semantic web, it is all about putting our granular-semantics-enhanced content into the system.
As humans, we can intuitively understand that a particular sentence or turn of phrase is appropriate for people in a particular situation. But without semantic data, that is near impossible for a computer. With the added data, it’s easily within the realm of computation.
That, I think, is a big part of the why – of the value add.
Rick: We also need the different parts of the authoring system to be able to communicate. We need one content language, understood by every application used to create and manipulate content. We need content shared and reused within the organisation.
The semantic structure we’ll get from better authoring tools will move us in that direction. Indeed, I think human-understandable authoring that translates automatically into semantic content is fundamental to enabling multiple tools to interface with each other’s content.