This is all much more complicated than anyone could imagine.
Internally, the XPath/XQuery parser associates a zero-based character offset with each token, and the offset associated with xs:NMTOKENS is (correctly) 20. The parser also maintains a table of newline offsets, so the character offset 20 can be translated to a zero-based (line=0, column=20) pair. The question is then how this should be presented to users.
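The offset-to-coordinates translation can be sketched roughly like this (a minimal illustration of the idea, not Saxon's actual code; the real parser would build the newline table once while tokenizing rather than on every lookup):

```python
import bisect

def line_column(source: str, offset: int) -> tuple[int, int]:
    """Translate a zero-based character offset into a zero-based
    (line, column) pair using a table of newline offsets."""
    # Positions of every '\n' in the source text.
    newlines = [i for i, ch in enumerate(source) if ch == "\n"]
    # The line number is the count of newlines strictly before the offset.
    line = bisect.bisect_left(newlines, offset)
    # The column is measured from the character after the preceding newline.
    line_start = newlines[line - 1] + 1 if line > 0 else 0
    return (line, offset - line_start)
```

For a single-line query, every offset maps to line 0 with the column equal to the offset itself, which is exactly the (line=0, column=20) case described above.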
Because XPath is often embedded in a host language such as XSLT, this raw information is paired with information about the location of the XPath expression within a containing document (that's why it makes sense to maintain zero-based offsets internally). In the general case it's complicated by the fact that (a) a SAX parser doesn't give us accurate location information for each attribute (only for the element), and (b) before we get to parse an XPath expression held in an XML attribute, the XML parser performs attribute value normalization. However, an editor such as Oxygen typically DOES have accurate location information for an attribute, and also has access to the unnormalized value, so it is able (with a lot of effort) to combine our zero-based location information with its own knowledge of the attribute position to do accurate redlining of the error.
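What the editor has to do can be sketched as follows. This is a hypothetical simplification that ignores attribute value normalization (entity references and whitespace replacement can shift offsets, which is part of the "lot of effort" mentioned above); it assumes the editor knows the 1-based position at which the attribute value starts:

```python
def absolute_position(attr_line: int, attr_col: int,
                      expr_line: int, expr_col: int) -> tuple[int, int]:
    """Combine the 1-based position of an attribute value within the
    containing XML document with the zero-based (line, column) reported
    by the XPath parser, yielding a 1-based document position."""
    if expr_line == 0:
        # Error on the first line of the expression: the attribute's
        # column is an offset that must be added to the parser's column.
        return (attr_line, attr_col + expr_col)
    # On continuation lines, only the parser's own column applies.
    return (attr_line + expr_line, expr_col + 1)
```

This is why maintaining zero-based offsets internally makes sense: a zero-based offset composes with an external base position by simple addition.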
That explains the complexity, but it doesn't explain why we output zero-based line and column numbers in the simple case where a query is read directly from an input file or from a string on the command line. In that case I think it would be appropriate to convert the values to 1-based for human consumption, which is what most users would expect.
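The proposed fix for the simple case amounts to a one-line conversion at the point where the message is formatted (again a sketch, not the actual Saxon code):

```python
def display_position(line: int, column: int) -> str:
    """Format internal zero-based coordinates as the 1-based line and
    column numbers users expect when the query comes straight from a
    file or the command line."""
    return f"line {line + 1}, column {column + 1}"
```

So the internal (line=0, column=20) for xs:NMTOKENS would be reported to the user as line 1, column 21, while the zero-based values remain available for hosts like Oxygen that do their own arithmetic.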