
RE: glossaries



Rick and Deborah are having problems reusing content to generate global
glossaries.

If you do a lot of documentation where reusability and management of
arbitrary texts is a significant consideration, you should probably be
working in a content management environment designed to support this, such
as Xyenterprise's Content@-XML or possibly Chrystal's Astoria.

At Tenix, we are implementing RMIT University's locally developed SIM
product (http://www.simdb.com/) to manage maintenance documentation for the
ANZAC frigates. By implementing single-sourcing, applicability and
effectivity concepts, we have already cut our documentation management
requirements for ship equipment maintenance routines by 80%, cut delivery
requirements by 95%, and reduced turnaround time from a year (updates to
complete ship sets of documents) to a couple of days or less (edit, review
and release single class-level documents). We have also seen major quality
improvements in the documentation from doing the work in an enforced and
auditable workflow environment. (My case study of our experience is due for
publication in the May 2001 edition of Technical Communication; a preprint
is available on request if you need the information.)

For example, although we write maintenance routines as documents in
FrameMaker+SGML, we view the resulting documents in HTML via a Web browser,
and SIM automatically outputs them to our client's maintenance management
system as a set of relationally based, comma-delimited data files plus an
HTML-formatted procedure text. We also extract a variety of data reports to
estimate logistic support requirements (e.g., equipment downtime forecasts;
usage requirements for parts, materials and fluids; etc.) from the same
source documents held in the repository. All of this happens automatically
in the background, so our authors don't even think about it.
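
To make the single-source pattern concrete, here is a toy sketch in
Python. The element names, attributes and output layout are all invented
for illustration; SIM's actual SGML structures and export formats are far
richer and are not shown in this post:

import csv
import io
import xml.etree.ElementTree as ET

# One hypothetical XML maintenance routine as the single source.
SOURCE = """<routine id="MR-001">
  <title>Replace seawater pump seal</title>
  <part nsn="4320-00-123-4567" qty="1">Mechanical seal</part>
  <step>Isolate and tag out the pump.</step>
  <step>Remove the coupling guard and seal housing.</step>
</routine>"""

root = ET.fromstring(SOURCE)

# Relational extract: one row per part, keyed by routine id, as a
# comma-delimited file for a maintenance management system.
parts_csv = io.StringIO()
writer = csv.writer(parts_csv)
writer.writerow(["routine_id", "nsn", "qty", "description"])
for part in root.iter("part"):
    writer.writerow([root.get("id"), part.get("nsn"),
                     part.get("qty"), part.text])

# HTML procedure text for viewing in a Web browser.
steps = "".join(f"<li>{s.text}</li>" for s in root.iter("step"))
html = f"<h1>{root.findtext('title')}</h1><ol>{steps}</ol>"

print(parts_csv.getvalue())
print(html)

The point is that both outputs come from the same source element tree, so
the author writes (and the reviewer checks) each fact exactly once.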

Such a system can be quite cost-effective for a large doco department
(e.g., five or more full-time authors) dealing with long-lived
documentation that goes through many revisions, as it provides many
different kinds of savings over the document lifecycle.

We are currently starting to implement Release 2 of the SIM DCMS product,
which will give us full management and reuse down to the level of single
SGML elements, automatic detection of similar texts, and a variety of
annotation and change-tracking functions, also down to the element level.
We anticipate this will give us another 50-75% reduction in the volume of
text we manage (based on estimates of redundancy across documents), and a
major improvement in document quality from being able to standardise our
descriptions of actions (i.e., write once, use many times).
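
For anyone curious what "automatic detection of similar texts" might look
like, here is a rough Python sketch using only the standard library. SIM's
actual matching algorithm isn't described here, so treat this purely as an
illustration of the concept (the element texts are invented):

from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical element-level texts pulled from a repository.
elements = {
    "e1": "Remove the coupling guard and set it aside.",
    "e2": "Remove the coupling guard, and set it aside.",
    "e3": "Drain the lube oil sump into a waste container.",
}

THRESHOLD = 0.9  # arbitrary cut-off for "near duplicate"

for (id_a, text_a), (id_b, text_b) in combinations(elements.items(), 2):
    ratio = SequenceMatcher(None, text_a, text_b).ratio()
    if ratio >= THRESHOLD:
        print(f"{id_a} ~ {id_b} ({ratio:.2f}): candidates for one shared element")

Flagged pairs like e1/e2 are exactly the redundancy you would merge into a
single element and reuse, which is where the 50-75% estimate comes from.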

The kinds of global glossaries Rick and Deborah describe would be very
easily provided from this kind of environment, as they would simply be a
different output extracted from the document repository.

With content management, glossary definitions could be maintained in a
single document, with changes propagated as automatically as you like
through all the other documents using the particular definitions.
Basically, we start managing content as identifiable blobs of knowledge
that can be propagated through a web of documentation, rather than as
strings of characters which have to be handraulically typed in wherever
they occur.
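
In miniature, the idea looks like this (a toy Python sketch; the terms,
manual names and field layout are all invented):

# Master glossary: each definition written once, tagged with the
# manuals that use it.
master = [
    {"term": "ILS", "definition": "Integrated Logistic Support.",
     "manuals": {"maint", "ops"}},
    {"term": "NSN", "definition": "NATO Stock Number.",
     "manuals": {"maint"}},
    {"term": "OEM", "definition": "Original Equipment Manufacturer.",
     "manuals": {"maint", "ops", "training"}},
]

def glossary_for(manual):
    """Build one manual's glossary as a filtered, sorted extract."""
    entries = sorted((e for e in master if manual in e["manuals"]),
                     key=lambda e: e["term"])
    return "\n".join(f"{e['term']}: {e['definition']}" for e in entries)

# Edit a definition in `master` and every manual picks it up on the
# next generation -- no retyping in each document.
print(glossary_for("ops"))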

Sorry about the evangelism, but I just completed a SIM update course last
week and I am still overwhelmed by what can be done with the technology
- Tenix has barely scratched the surface!

Regards,

Bill Hall
Documentation Systems Specialist
Integrated Logistic Support
Tenix ANZAC Ship Project
Williamstown, Vic. 3016 AUSTRALIA
E-mail: bill.hall@tenix.com



-----Original Message-----
From: Deborah Snavely [mailto:dsnavely@Aurigin.com]
Sent: Tuesday, 27 March 2001 8:39
To: framers@omsys.com
Subject: Re: glossaries


Rick Henkel asked:
>My team lead wants a large glossary for our entire documentation suite. She
>then wants to use the applicable parts of that glossary for smaller
>glossaries in individual manuals.
>
>My first thought was to use a custom marker for each manual, use the entry
>text as the marker text, and automatically generate a glossary for each
>manual. We have character tags assigned to certain words in the entries,
>but I figured I could use IXGen to easily add that formatting. But I soon
>discovered that several glossary entries are too big for the marker's
>255-character limit, especially when I added the character tag building
>blocks.
>
>So then I turned my attention to cross-references. I figured a writer could
>create a cross-reference to the master glossary for each entry he needs in
>his manual. Because we wouldn't want the cross-references linked, we could
>just convert them to text when the glossary is built. (But any changes to
>the master glossary would not automatically make it into the little
>glossary.) But the character tags weren't coming through because the
>cross-reference just uses the <$paratext> building block. So the writer
>would still have to manually insert the character formats.
>
>I guess I have a few questions for you all:
>
>* Is there a building block I could add to the cross-reference format that
>would bring the character tag info across?
>
>* Is there another method that would work better that I haven't thought of
>yet?

Rick,

A couple of thoughts. It sounds to me as though you're going to bump
noses with that 255-character limit no matter what, but here's what I
know.

One is that you can set the font back to default in a marker, following an
entry, using the shortcut </> (I think... it's been a year and I haven't
used it since; it's in the documentation somewhere, I believe). This
shortcut can save a LOT of space in complex marker entries full of
formatting.
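
To see what that buys you against the 255-character limit, here is a quick
Python check. The character-tag name and entry text are invented,
<Default Para Font> is the usual long-form reset in marker text, and the
</> shortcut is taken at face value from the description above:

LIMIT = 255  # FrameMaker's marker text limit

entry = "asynchronous transfer mode (ATM)"  # invented example entry
long_form = f"<CharTag>{entry}<Default Para Font>"  # full reset
short_form = f"<CharTag>{entry}</>"                 # the </> shortcut

for label, text in [("full reset", long_form), ("</> shortcut", short_form)]:
    print(f"{label}: {len(text)} of {LIMIT} characters used")

Sixteen characters saved per formatted run doesn't sound like much, but
multiplied across several runs in one entry it is real headroom.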

The other is more of a thought-starter. It's an approach that I used for
a multi-group, multi-document glossary at a previous work site. 

Our goal was to examine and normalize glossary definitions across
documents. I assembled the existing source glossaries, applied a different
conditional text setting to each source book's glossary content,
slam-dunked the whole lot into a table for quick sorting, and then
converted it back to text.

My work focused on eliminating duplicates as much as possible. Sometimes
you need a short definition for a user guide and a long one for a
programmer guide, but I normalized the short defs to a standard style,
expanded and cross-referenced all the acronyms, etc., etc.
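
If you were scripting rather than table-sorting, the same
assemble-tag-sort-and-flag pass might look like this toy Python sketch
(the book names and definitions are invented):

from collections import defaultdict

# Each book's glossary entries, tagged with their source book the way
# the conditional text settings were.
books = {
    "UserGuide": [("host", "The machine that runs the server.")],
    "ProgGuide": [("host", "A machine, identified by name or address, "
                           "that accepts client connections."),
                  ("socket", "An endpoint for network communication.")],
}

by_term = defaultdict(list)
for book, entries in books.items():
    for term, definition in entries:
        by_term[term].append((book, definition))

# Sorted pass: any term defined in more than one book is flagged for
# normalization, just like scanning the sorted table by eye.
for term in sorted(by_term):
    sources = by_term[term]
    flag = "  <-- normalize: multiple definitions" if len(sources) > 1 else ""
    print(f"{term}{flag}")
    for book, definition in sources:
        print(f"    [{book}] {definition}")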

Later, when any of our writers was going to do a new version of a given
book, we'd use conditions to give them only the appropriate glossary or
glossaries for that book, save the result to another file, delete the
hidden conditional text, and give THAT file to the writer. Updates happened
after that book went to bed, in one of two ways:
1) with limited changes, the writer sent me email with the updated entry
and text, and I updated the super-glossary;
2) with massive changes, I treated the new final glossary as a new glossary
for incorporation into the "super-glossary," except that I didn't need to
create a new condition for that source book.

Clearly, neither my approach nor yours is completely appropriate, but the
only thing I can think of that really is appropriate is a
glossary-as-database. And who has that kind of resources?
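
For what it's worth, a bare-bones glossary-as-database needs fewer
resources than you might think. Here is a minimal sketch using SQLite,
which ships with Python; the schema and entries are invented for
illustration:

import sqlite3

con = sqlite3.connect(":memory:")  # a file path would persist it
con.execute("""CREATE TABLE glossary (
    term TEXT, definition TEXT, book TEXT,
    PRIMARY KEY (term, book))""")
con.executemany(
    "INSERT INTO glossary VALUES (?, ?, ?)",
    [("host", "The machine that runs the server.", "UserGuide"),
     ("socket", "An endpoint for network communication.", "ProgGuide")])

# Each book's glossary is just a query; the database does the
# bookkeeping that the conditional text settings did.
for term, definition in con.execute(
        "SELECT term, definition FROM glossary "
        "WHERE book = ? ORDER BY term", ("UserGuide",)):
    print(f"{term}: {definition}")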

Deborah Snavely
Document Architect, QA & Docs, 
Aurigin Systems, Inc. http://www.aurigin.com/ 


** To unsubscribe, send a message to majordomo@omsys.com **
** with "unsubscribe framers" (no quotes) in the body.   **
