
Re: Reading the runes

Dan Emory wrote:

LK>>I suspect that Adobe will add export filters (or in the case of
LK>>Frame, a WWP project) to their products.
LK>>These filters/projects
LK>>would write XML that conforms to one or more well-defined XML DTDs.
>[...]Standardization on a few DTDs is precisely the wrong
>direction. XML is intended to facilitate all types of information
>interchange, [...] the approach
>you describe will always lag behind.
>Not only that, but you do not mention the requirement for importing, as
>well as exporting, XML. The most likely scenario is that authoring
>will (usually) take place in WYSIWYG DTPs, but the exported XML
>is parsed into its constituent elements and stored in a database,

Once again, let me stress that this is all rank speculation on my part.

You're correct, *if* you assume the flow is to be bidirectional.
I'm assuming a *unidirectional* flow:

	FM -> XML DB -> presentation

If you're assuming that the back end (i.e. the presentation side)
is driven by XSLT, you don't even have to parse out the pieces --
retrieve a document, let the transformation weed out what isn't
needed, convert the rest to the proper presentation format.
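For the "let the transformation weed out what isn't needed" step, a minimal XSLT sketch would be an identity transform plus empty templates for whatever a given device can't render (the element names here are invented for illustration):

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- identity rule: copy everything through by default -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
  <!-- weed out what this presentation device can't use -->
  <xsl:template match="graphic|sidebar"/>
</xsl:stylesheet>
```

A cell-phone stylesheet might suppress far more; a print stylesheet, almost nothing.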

This approach has several advantages:

  - You create content using your current authoring tools (and
    if you have to change something, you do it there & re-export).

  - Adobe can implement the exporters very quickly once the DTDs
    are agreed upon.

  - If Unicode is a necessity, it can be done outside of FrameMaker
    (allowing Adobe to include FM as part of the system without a
    complete rewrite).

And yes, there are disadvantages as well. Besides having to
re-export for even the smallest changes, you'd better have some
heavy-duty servers for all this.

I'm actually working on a cut-down version of this very
thing: a database of XML fragments with some CGI backends to
extract and convert on the fly. I'm implementing the entire
thing with a pile of Perl scripts.
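A minimal sketch of that extract-and-convert-on-the-fly idea, in Python rather than Perl, with the fragment "database" reduced to an in-memory table and every name hypothetical:

```python
# Sketch of a fragment database with on-the-fly conversion.
# The real thing is Perl CGI scripts; this is illustrative only.
import xml.etree.ElementTree as ET

# Stand-in for the XML-fragment database: id -> fragment source.
FRAGMENTS = {
    "install-procedure": "<procedure><title>Installing</title>"
                         "<step>Run setup.</step></procedure>",
}

def extract(frag_id):
    """Retrieve one stored fragment and parse it."""
    return ET.fromstring(FRAGMENTS[frag_id])

def to_html(elem):
    """Convert a fragment to a crude HTML presentation on the fly."""
    if elem.tag == "procedure":
        title = elem.find("title").text
        steps = "".join(f"<li>{s.text}</li>"
                        for s in elem.findall("step"))
        return f"<h2>{title}</h2><ol>{steps}</ol>"
    return ET.tostring(elem, encoding="unicode")

print(to_html(extract("install-procedure")))
```

A CGI wrapper would just read the fragment id from the query string and print the converted result.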

We may also have a slight disconnect with our definitions of
"collaboration." I took this to mean teams whose individuals
produce atomic units (I write text, my wife edits the video,
someone else produces a soundtrack, and none of us intend to
change the other's parts) that make up the final product.
You seem to be assuming a much finer-grained approach:

>where change control/revision tracking, check-in/check-out, and
>other similar functions are managed. Consequently, the authoring
>software must also have the full capability to import XML. This
>approach greatly facilitates information reuse and repurposing,
>as well as collaborative authoring. ...
>[...] the author doesn't have to check out an entire
>document. Instead (s)he checks out only the chunk that needs
>changing, which remains locked until the revised version is
>checked back in. This allows revision tracking to any level of
>granularity, and also permits many authors to work simultaneously
>and without conflict on the same document.

A very elegant solution, indeed.

This is kind of fun -- I think we're both interpreting the tea
leaves through our personal philosophies. My approach cobbles
together existing tools to get something in place right away.
Quick, dirty, cheap, workable -- the AppleScript approach. :-)
Your approach is obviously more elegant, and would likely be a
joy to work with, but how long would it take to implement? And
at what cost?

>Converting unstructured Frame docs to XML via paragraph-tag-mapping
>algorithms is not a workable solution, [...]  particularly true
>in the case of legacy documents ....

True. BUT, I believe that it's a moot point. Network publishing
will require significant changes to workflow -- perhaps to the
point where WYSIWYG becomes a hindrance rather than a help.
(If content has to work with everything from paper to cell
phones, which environment does your WYSIWYG tool assume?)

Documents created in PageMaker or InDesign will have this
problem as well -- and there's no InDesign+SGML in the works
that I'm aware of. However, Greg Henderson wrote this back on
September 21:

    ... (in my own biased opinion). You can use FM markers
    to achieve the same result as applying SGML markup. For
    example, if I am developing [a procedure], then I can
    enter a marker in the procedure title that identifies
    every screen in the product GUI where that procedure can
    be performed or used.

To bring legacy documents into the new environment, you'll have
to rewrite them anyway. Whether you use a formal structure, or
tag internal fragments in some way that the exporter understands,
you still need to think about how various devices will use your
document. (This is also true for new content.)

>Moreover, the limitations of the tag mapping approach precludes
>the important ability to add metadata, in the form of attribute
>values, as well as the ability to export complex multi-level
>structure that is beyond the capabilities of the tag mapping approach.

Metadata: depends on what metadata you want to add. High-level
metadata, such as content summary, keywords, and the like, could
be entered from a dialog box at export.
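A sketch of what such an export might produce, with element and attribute names invented purely for illustration:

```xml
<!-- hypothetical: metadata gathered from an export dialog,
     written as attribute values on the root element -->
<document keywords="installation, setup"
          summary="How to install the product">
  <!-- exported body content goes here -->
</document>
```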

And you're certainly right about exporting *complex* structure --
but any document that uses style tags properly (and they do exist)
has a simple structure implied by those tags: sections are separated
and nested by heading level, lists by consecutive bullets, and so
forth. Not perfect by any means, but probably good enough.
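The "structure implied by the tags" can be sketched mechanically. Here is an illustrative Python version (style names like "Heading1" and "Bullet" are assumptions, not any actual FM template) that nests sections by heading level and groups consecutive bullets into a list:

```python
# Illustrative only: infer structure from flat style-tagged paragraphs.
import xml.etree.ElementTree as ET

def to_xml(paragraphs):
    """paragraphs: list of (style, text), e.g. ("Heading1", "Intro")."""
    root = ET.Element("document")
    stack = [(0, root)]          # (heading level, open section)
    current_list = None
    for style, text in paragraphs:
        if style.startswith("Heading"):
            level = int(style[len("Heading"):])
            current_list = None
            while stack[-1][0] >= level:   # close deeper/equal sections
                stack.pop()
            section = ET.SubElement(stack[-1][1], "section")
            ET.SubElement(section, "title").text = text
            stack.append((level, section))
        elif style == "Bullet":
            if current_list is None:       # consecutive bullets share one list
                current_list = ET.SubElement(stack[-1][1], "list")
            ET.SubElement(current_list, "item").text = text
        else:
            current_list = None
            ET.SubElement(stack[-1][1], "para").text = text
    return root

doc = to_xml([("Heading1", "Setup"), ("Body", "Do this first."),
              ("Bullet", "one"), ("Bullet", "two"),
              ("Heading2", "Details")])
print(ET.tostring(doc, encoding="unicode"))
```

Twenty-odd lines, and it already captures nested sections and lists; the hard part, as always, is documents that don't use their style tags properly.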

This is how FM+SGML structures a document with conversion tables,
after all. I've done it, and it's not pretty, but it works. So
you *can* use tag mapping, if you invest time up front -- or, you
can invest roughly the same amount of effort into structuring
the document. Better yet, start with a structured template and a
lot of these issues go away.

Presumably, Adobe will furnish templates that are wedded to a map
of some sort -- if you add or change tags, you'll have to fiddle
with the map too.

>The only Frame product capable of delivering what is described in
>Adobe's press release [...] is FrameMaker+SGML. But that product
>needs extensive enhancements [....]

Once again, it boils down to what assumptions we make. All of the
enhancements you describe are good things, and will be needed if
FM+SGML is to stay viable in the long run. I made my assumptions
starting with what FM and the rest of the Adobe stable is capable
of *now*, adding the minimum necessary capability to export to a
Grand Unified Interchange Format.

>>And (here's where the partnerships come in) various presentation
>>devices would extract only the info that it can use....
>XSLT (XSL transformations), which is part of the XML standard set,
>is intended to do what you describe above.

Exactly my point; I should have made that clear originally. And
that's how I arrived at my assumption of the standardized
collection of DTDs -- each device has to know what it should
extract and what it can safely ignore (using XSLT). To do that,
the devices need to know which tags are which. Ergo, the need
for standardized DTDs -- it's far easier for Nokia and Motorola
to work from the same XSLT page than it is for content providers
to create two separate documents with Nokia tags & Moto tags.

(Again, I'm using the lazy man's approach of cobbling it all
together from existing parts and a roll of duct tape. But this
is essential if Adobe Studio is going to be live in spring 2001.)

>When you read through the press release, you're forced to conclude
>that, in essence, Adobe's vision now includes its commitment to XML,
>But until Adobe issues a press release explaining in detail how it
>intends to implement what its "vision" is, particularly as it relates
>to FM+SGML, I remain skeptical.

Me too! But it's loads of fun to try interpreting these press
releases.


** To unsubscribe, send a message to majordomo@omsys.com **
** with "unsubscribe framers" (no quotes) in the body.   **