ZIG Meeting

Minutes from the Palo Alto ZIG meeting, March 17-19, 1999, are available for March 18-19 only, and are provided by Denise Troll.

March 18, 1999

Palo Alto, CA



5 & 6. Proposed record syntax registration (MARC formats and subformats)



Five new MARC formats proposed for registration. Also proposed registration for MARC record syntax subformats for bibliographic, authority, holdings, community information and classification.



Stovel: register only the formats and subformats that really already exist. Agreed. Issue about the object identifiers. Agree that if number is unqualified (for example, USMARC 1.2.840.10003.5.10), then assume it could be anything (bibliographic or any of the other subformats). Zeeman is concerned about objects that aren't leaf nodes on the tree. Denenberg: there are other instances of that, for example, Extended Services. To explicitly request bibliographic, specify 1.2.840.10003.5.10.1.
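The qualification convention agreed here can be sketched as a small lookup. This is a sketch, not a registry: the base USMARC OID (1.2.840.10003.5.10) and the bibliographic arc .1 come from the minutes above; any other subformat arcs are deliberately left out because their assignments are not stated here.

```python
# Sketch of the agreed OID convention: an unqualified USMARC OID may denote
# any subformat; appending .1 explicitly requests bibliographic. Only the
# .1 arc is taken from the minutes; other arcs are not listed here.
USMARC_BASE = "1.2.840.10003.5.10"
SUBFORMAT_ARCS = {"1": "bibliographic"}  # only .1 is stated in the minutes

def interpret_marc_oid(oid: str) -> str:
    if oid == USMARC_BASE:
        # Unqualified: could be bibliographic or any other subformat.
        return "any subformat"
    if oid.startswith(USMARC_BASE + "."):
        arc = oid[len(USMARC_BASE) + 1:]
        return SUBFORMAT_ARCS.get(arc, "unknown subformat arc")
    raise ValueError("not a USMARC record-syntax OID: " + oid)
```

This mirrors Zeeman's concern: the unqualified (non-leaf) OID is still meaningful, it just carries less information than a qualified one.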



7. Bib-1 Diagnostics Extensions pending approval



Approved.



8. Amendment AM0005: Z39.50 Duplicate Detection Service submitted for approval

At Madrid ZIG, approved Duplicate Detection Service. Amendment approved.

9. Commentaries



Single-PDU, Multi-database searching - should it be supported?



See also Clarification on this topic. Denenberg: the clarification is worthwhile because it addresses questions that many people have asked, but he's not sure the commentaries need to be published.



Denenberg: "In the process of [previous] debate, the discussion became wild and furious" over handling of many virtual databases. Is multi-database searching meaningful or an abstraction? In the past, we explicitly avoided searching multiple databases distributed across multiple servers.

Single-server, Multi-database searching - should it be supported?



C Lynch thinks the single-PDU multi-database searching is OK, but the idea of searching multiple databases distributed across multiple servers is beyond the scope of the protocol (though you can build systems like this). If we include the commentary on single-server multi-database searching support, we should clarify the scope of the protocol in the commentary.

Zeeman sees a similar scope issue in single-PDU multi-database searching support.



Denenberg: the whole point of this is so that when someone in the future asks whether multi-database searching is good or bad, we have commentary to point to that reflects a fair amount of discussion with no consensus reached, but records the different views. If these commentaries don't do this, we should abandon them. The commentary does not take a position on the issue. Most rational people here would take one particular position, but his sense of the discussion was that a handful of people felt so strongly about this that we can't come up with a commentary that takes a position without creating controversy.



Hinnebusch doesn't agree. We never have 100% agreement on anything. This issue is not that contentious in the ZIG. Stovel: the standard clearly allows you to search multiple databases on a single server regardless of the number of PDUs. Denenberg: the question here is philosophical. Is the question even meaningful?



C Lynch: it's an application design issue. There is a question about diagnostics and whether they can be provided; in some cases diagnostics can be returned, and this needs to be clarified. We must define what multi-database searching is outside of a single PDU, and what the implications are for the protocol.



St Gelais is not sure if the single-server commentary is meaningful or necessary. Denenberg: let's drop it. Agreed.



Denenberg: What are the implications of non-single-PDU multi-database searching for the protocol (for example, a follow-on search to a result set where you indirectly attempt to do multi-database searching)? Stovel: we need a diagnostic for that situation. Hinnebusch: is it meaningful to have a diagnostic for the case of multiple databases different from the diagnostic for the case of a single database? Is this a red herring? C Lynch: could craft a diagnostic that says "you're trying to do a multi-database search and I don't do that." The problem is what to do at the client. Can clients do anything useful with such a diagnostic? Stovel: a generic client will try more than one server. She wants a diagnostic that says "cross-database operations are not done by this server." Zeeman: we have a diagnostic for this. Debate among Stovel, Zeeman, Taylor.



10. Clarifications



Keyword Searching of Bibliographic Systems



Previously proposed as an Implementor Agreement; technical content has not changed.



Moen: Why is this not an implementors agreement? Hinnebusch: it was political. As an implementors agreement, vendors could ignore it. As clarification of the protocol, they cannot.



EXTERNAL Definition for "Record" in Update Extended Service



Zeeman wants text to say that this requires some a priori agreement between the origin and the target system. Denenberg: that's beyond the scope of the original question. Zeeman: it's inherent in the question. Denenberg: prefers to keep Q&A straightforward and not answer questions that haven't been asked. Zeeman: if this is all it's going to say, the clarification is not worth it. (It repeats only the definition.)



Agreed to delete this clarification.



Ascending and Descending by "Frequency" in Sort



Taylor: after you do the sort in the example, what are you left with? C Lynch: make it explicit that without a second sort key, the list appears in whatever order the server chooses. Slight revision required, then approved.
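The point about the missing secondary key can be illustrated with an ordinary sort. A sketch only: the term/frequency data is invented, and "server's choice" is stood in for by input order, since Python's sort is stable.

```python
# Sorting scan-style (term, frequency) entries descending by frequency.
# Without a second sort key, entries with equal frequency stay in whatever
# order the server happens to hold them (here: input order); adding a
# secondary key makes the tie-break explicit, per Lynch's clarification.
entries = [("cat", 3), ("dog", 7), ("eel", 3), ("ant", 7)]

by_freq = sorted(entries, key=lambda e: -e[1])                    # ties: server's choice
by_freq_then_term = sorted(entries, key=lambda e: (-e[1], e[0]))  # ties: alphabetical
```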



Single-PDU, Multi-Database Searching



Discussed above. Further discussion and request for clarification about the two cases described in the text.



Character Set Negotiation -- Implementation Level and Collections



What do "implementation level" and "collections" refer to?



Discussion deferred until tomorrow.



11. Encapsulation



Proposed amendment Z39.50MA-AM0001.



Instead of concatenating PDUs, they should be nested. We've been discussing this for years. Formal discussion in Brussels in 1996 got out of hand and we deferred it until June 1998 in Washington D.C., at which point LeVan said this was the wrong way to do it. He was to post an alternative approach to the ZIG list, but it turned out he didn't have a better way to do it at that time. The issue has come up in different contexts since then (though not at the Madrid ZIG), for example, discussions of statelessness of Z39.50 on the web. There is reason to reconsider it now, and recently LeVan suggested a way to do this.



The change suggested was to encapsulate the search within the init, encapsulate the present within the search, etc. If you want to sort, the sort would be encapsulated within the search and the present would be encapsulated within the sort. From the discussion on the ZIG list, this seems to be what people want. Hinnebusch: the recursive model implied is much easier to work with.
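The recursive model can be sketched with plain dataclasses. A sketch only: real Z39.50 PDUs are ASN.1/BER structures, and the type and field names here are invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

# One-level-deep encapsulation per the proposal: each PDU carries at most
# one encapsulated PDU, so init > search > sort > present forms a single
# stateless request. Field names are illustrative, not from the standard.
@dataclass
class PDU:
    kind: str                          # e.g. "init", "search", "sort", "present"
    encapsulated: Optional["PDU"] = None

def chain(pdu: Optional[PDU]) -> list:
    """Unwrap the nesting into the order the server would process it."""
    out = []
    while pdu is not None:
        out.append(pdu.kind)
        pdu = pdu.encapsulated
    return out

# The example from the discussion: sort encapsulated in the search,
# present encapsulated in the sort, all carried inside one init.
request = PDU("init", PDU("search", PDU("sort", PDU("present"))))
```

The recursion is what Hinnebusch calls easier to work with: one unwrap loop handles any depth, rather than special-casing each concatenation pattern.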



If you have nested PDUs, what happens when you have two searches and each has a sort and present? Hinnebusch: you can now bundle all of this in a single PDU. This provides flexibility. Denenberg: if we begin discussion of combining additional conditional logic with this, we'll never get this approved. This is one of the reasons why the discussion in Brussels did not reach consensus. Hinnebusch: conditional logic does not make this more messy. Denenberg: this proposal avoids getting into treating all of those conditional cases, and because we're changing from concatenating to encapsulating, he's hoping this does not open the door for all that. He doesn't think we can approve this at this ZIG. We need to examine this further. Denenberg will rework this and post it again so we can take action at the next meeting.



Hinnebusch: do we have consensus that encapsulation, where PDUs are stacked, is better than nested sequences? The text must explicitly say that multiple nested PDUs are not allowed at any level. Stovel: why is encapsulation better than nesting? Taylor/Hinnebusch: less code to write.



Jacob: Can a Close be followed by another init? The text should state whether this can be done. Hinnebusch: an init cannot be encapsulated; the text says that in number 2. Denenberg: we're trying to achieve statelessness. Hinnebusch: the stated goals are a single stateless search response, giving sort information with a search PDU (efficiency), and providing a mechanism to get rid of the ugliness of piggyback present.



Stovel thinks we should allow this. Hinnebusch/Denenberg: we are being arbitrary and capricious in saying we can't do this, but it's too complicated to allow. People already think the protocol is too complicated. C Lynch: allowing this would open a can of worms.



Hinnebusch: maybe we need a ZIG clarification of this. Denenberg: agreed to revise the text.



Olsen/P Henrik want to nest (serialize) PDUs within concatenated (concurrent) operations for a multi-user system with work to be done within one PDU. That would be more elegant than opening many connections, though admittedly it could delay the response (if one of the queries was very large).



Denenberg: easy to modify definition in the future, so let's exclude it for now and we can change our minds in the future. Agreed.



12. Z39.50 Thesaurus Profile (zthes) - Taylor



Taylor: After the Madrid ZIG, he talked to ZIG folks and prepared the thesaurus profile so we could discuss at this ZIG. There are 2-3 implementations: a server and a client (built out of a gateway). Two goals for this meeting: look at areas in the profile that refer to the standard, and look at what it doesn't address that people think it should address.



Section 3.2 tagset - They need several things that are not yet available. Some can be incorporated into tagSet-M; some cannot. Taylor wants the tags added to the standard.



Section 3.5 thesaurus attribute set - Not sure how many rules he broke (when compared with the doc for developers of attribute sets). Moen/Zeeman: what is the "dominant" attribute set referred to? The dominant set is the attribute set specified at the top; semantic qualifiers must specify how inherited attributes are used. Taylor/Zeeman assumed it was the Utility set. Denenberg: the Utility attribute set cannot be used as the dominant set. Hinnebusch: that's a problem; it means you couldn't use just the Utility set to see what records were added to the database. C Lynch: we haven't explored this in much depth. The Utility set does allow you to do universally meaningful queries. We may want to think about whether we should allow the Utility set to be used as the dominant set. That would make life easier for things like the thesaurus attribute set that need only the Utility set and a couple of others.



Denenberg: but what if you want to use the Zthes attribute set with the ZBIG attribute set? C Lynch: there is some orthogonality between the Utility attribute set and the sets it can be combined with. The place where you get into trouble, which "dominant" doesn't capture well, is when you are using three attribute sets. The Utility set is relatively orthogonal to the other attribute sets, which are capable of contradicting each other. Someone has to sort out the contradictions. Which attribute set should sort out the contradictions? Denenberg: either set or both sets could specify how to sort out the contradictions. We can allow the Utility set to be used as the dominant set, but does that solve the problem? Should we wait until we have a real instance of ambiguity?



Hinnebusch thinks the application-specific set (like the Zthes attribute set) should be the dominant set. If you have a single attribute set in a query, the notion of a "dominant" set does not apply. Denenberg: when parsing a query with attributes from two different sets and the usage of the two sets in combination are ambiguous or conflicting, then the one specified as the dominant set rules. C Lynch: make a statement like -- the thesaurus attribute set can be used as a dominant set, however its use is envisioned primarily in conjunction with other attribute sets and at present these are sufficiently orthogonal that you don't need to specify at this time the rules for resolving conflicts or ambiguities. In these cases we recommend that Zthes not be used as the dominant set. Doing something like this will provide guidance to users of Zthes and where they might expect to run into trouble. It also gives a "heads up" to other attribute set developers.
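Denenberg's rule can be sketched as a tiny resolver. This is a sketch under the assumption that each attribute set supplies its own reading of an ambiguous attribute; the set names and readings below are invented examples.

```python
from typing import Optional

# When attributes from several sets combine ambiguously on one term, the
# set declared dominant supplies the ruling interpretation. If the sets
# are orthogonal (no conflicting readings), no dominant set is needed --
# which is C Lynch's recommended posture for Zthes at present.
def resolve(interpretations: dict, dominant: Optional[str]) -> str:
    distinct = set(interpretations.values())
    if len(distinct) == 1:              # orthogonal: everyone agrees
        return distinct.pop()
    if dominant is None:
        raise ValueError("conflicting interpretations and no dominant set")
    return interpretations[dominant]    # the dominant set rules

# Hypothetical conflict: two sets read a shared attribute differently.
readings = {"zthes": "term language", "bib": "document language"}
```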



Section 3.6 - the values need to be renumbered.



Denenberg's intention is that there is a certain point in the development of attribute sets where, when you reach stability, you're ready to number them formally and get an object identifier and number the attributes. Hinnebusch: do the attributes that were developed for collections belong in the Utility set (e.g., broader, narrower)? Denenberg: we went through the exercise of trying to figure out whether the hypothetical thesaurus application was a good fit with the collection profile. Theoretically, we thought we would someday build the thesaurus profile on top of the collection profile, but looking at Taylor's Zthes profile it doesn't look like that's possible. Maybe we need some new element set names or use of espec. For example, if you had a relation type attribute within subrecord, you could search for subordinate (narrower) terms.



Zeeman: what does scanning this type of database mean? Taylor: as the profile stands, it's all about navigation. Scan is on the list of things to consider, but hasn't been done yet. Zeeman: Zthes has potential to help build highly navigable library catalogs, but it needs scan.



Denenberg: is the model here that if you retrieve a record associated with a term, then the subrecords that you get are a single level below and a single level above? Taylor: yes. You cannot get just the narrower or broader terms. To get only one of these, you need an element set. You can define an element set based on the value of the elements. Hinnebusch finds it strange that the relationship between the records in the database is not specified. He wants to be able to provide a term in a query with some kind of relation attribute that indicates narrower or broader. C Lynch: Taylor's model mimics what print thesauri actually do. Denenberg: suppose you have a large number of narrower terms and you only want the first 20, that's a difficult problem to do with present unless you employ espec, but it would be easy to do with a search.



The subrecord for a term is not as complete as the full record. C Lynch: the debate here is data models. Hinnebusch: will we end up with two profiles -- Taylor's and one that does what Hinnebusch wants? Denenberg: why would someone build a new profile instead of just adding attributes or element set names to Taylor's? Tibbetts: has thesaurus database that follows Taylor's model even though she doesn't use his profile. Taylor: the role of the profile is to provide a concrete way for you to think about what you're doing. His profile enables the retrieval of broader or narrower terms by getting the record and following links to subrecords.



Zeeman: this model becomes awkward when there are very large sets of related terms. Agreed. Why not do this with query? Denenberg: we don't have to change the data model here. Taylor will confer with the rest of his working group. Do we want two ways to do this within one profile, or two profiles? Zeeman/Tibbetts: look at MARC name authority files, which are a thesaurus of names (places, people, etc.). The United States record points to 10,000+ names. Taylor concedes that there is an issue of concern here.



C Lynch: what you often want to do in practice is not just get the immediate broader and narrower terms, but retrieve the whole subtree. The way the recursion works here is across the wire as protocol transactions, which will be very slow. The thing to think about is whether it's worth it to include here some other way to do this recursion without the performance hit. There are several ways to do this. Denenberg: an element set won't do this for you, but a different schema might. The schema here defines a logical retrieval record with one level down and one level up. You could do this with a different schema name that defines a different or indefinite number of levels. Or have a single schema that defines an infinite number of levels and use an element set name if you want to retrieve only one level. Perhaps have different schema that retrieve different views (one record in each case). Taylor: but some of the records could be huge. Yes.



Hinnebusch sees another issue wrapped up in this. This is the third attempt to do a thesaurus profile. Taylor's group looked at the previous work. No one liked the approach taken with Scan. Denenberg wants to see movement on this quickly because ZBIG needs this. (They need a way to handle thesauri and they have so many different thesauri.) The next draft of the profile should address the questions raised here, provide a path for extensibility, and invite comment.



Taylor can open the Zthes list to broader discussion. Hinnebusch has thesaurus handling now that can be done with query, not present. Denenberg: Taylor will take this discussion back to the working group and suggest moving the discussion to the ZIG list (instead of just the Zthes list).



Section 4: future directions include support for poly-hierarchical thesauri, non-preferred terms, multilingual thesauri, thesauri version numbers, post coordination, Z39.50 date/time structure (developed to handle different calendars, B.C., etc), use of Scan, etc. Denenberg: be careful about the Z39.50 date/time structure. If the Zthes group decides not to use it, what is the implication for the standard?



13. X-Structured Information via Z39.50 (Francisco Queiros Pinto)



13.1. Formalizing a Z39.50 Profile in RDF

13.2. Exchanging Z39.50 Profiles using Explain Facility



Topic postponed until next ZIG.



Recap of yesterday's discussion:



Stovel summarized yesterday's discussion: we need two different kinds of semantic qualifiers on a name (type and role). If you're going to use more than one occurrence of a given attribute, you do it as a list. But we said that a list gives the server a choice of what matches its indexing. We want to change the architecture rather than the attribute set so that we can indicate the two kinds (type and role). Alternatively we can add one or two new attribute types to this attribute class. One attribute type would deal with properties, e.g., corporate name, personal name, conference name. The other attribute would be the relationship of the name to the work, e.g., author, editor, illustrator. Each of these would have the same kind of qualifiers, giving the server the choice of the best match for its indexes. Question: if we have these two new attributes, what is a semantic qualifier?



Denenberg: semantic qualifier describes the semantics of the access point (type), the new functional qualifier describes the relationship (role) of the term in the record.



Stovel: what are the rules if there is a list of alternatives? Denenberg/Hinnebusch: same rules apply, server choice. Stovel: is the new functional qualifier generic -- should it be elevated to the level of the new attribute architecture? Hinnebusch: yes. Zeeman: by adding a functional qualifier we're allowing multiple levels of granularity.



C Lynch likes the splitting out of role, but wonders whether we should call it a "functional" qualifier or whether we should call it the "role" qualifier. Denenberg: Dublin Core folks have been floundering with trying to talk about types and roles. C Lynch does not really care what we call it, so long as we provide descriptive prose to make the distinction because the distinction is helpful. This kind of thing came up a lot in the early CIMI discussions. Another interesting thing to note is that at least some of the discussions around Dublin Core seem headed in this direction. Let's go ahead and do it. Denenberg: this will be version 1.1 and will not destabilize early implementations of the new architecture.

Hinnebusch: we need to rework the cross-domain attribute set document to indicate which are semantic qualifiers and which are functional qualifiers, and move date added and date modified to the Utility attribute set. A question was raised about the "contained in" qualifier. Stovel proposed "related resource" instead of "contained in." C Lynch: DC folks think of this in terms of compound objects. C Preston offered an example when no one could figure out how this might be used: find all the articles with X in the title in Scientific American.

C Lynch: Dublin Core for years swept the whole business of data models under the rug. Recently they've gotten sidetracked into this issue because various parts of the DC crew have decided that maybe they should have an underlying consensus data model and make it conform with or map to the work that the indecs project is doing (oriented towards rights management). There are a number of problematic DC elements; no one knew what they meant because of the lack of a data model, but as soon as you start talking with reasonable specificity about data models you move into specific domains and out of cross-domain searching. Godfrey Rust is working on INDECS, which is concerned with rights-management metadata; this required him to build a data model that captures the kinds of relationships seen in the music industry (e.g., linkages of subsidiary rights).



Hinnebusch: what do we do with the cross-domain access point attribute "relation" and semantic qualifier attribute that combines with "relation" that's called "contained in"? We can leave it where it is, move it into the Utility set, or strike it from existence. Denenberg: we have no choice but to leave it where it is. Stovel disagrees. Denenberg doesn't see any benefit to moving it into the Utility set. Agreed that if we don't know what it is, we shouldn't move it into the Utility set. Denenberg: we shouldn't strike it just because we don't understand it. Stovel: when DC gets ready to do their profile, they can tell us what it is. Hinnebusch: we can send this back to LeVan and ask for semantics. C Lynch: DC has a working group trying to develop a taxonomy of relationships. If in fact they can do that, then it's reasonable to take that set of values to characterize the relationships. D Bearman is working on the taxonomy. Moen: do we have to assume that each of these elements is an access point? Or is it perhaps additional information for the user. Denenberg: we decided that they all have to be searchable, though they don't all have to be supported.



Taylor proposed: let's keep the "relation" access point and scratch the "contained in" semantic qualifier until they provide a taxonomy. Agreed.



Creator, publisher, contributor, and editor are all roles of name. Publication date and acquisition date are roles of dates. Abstract and notes are types of descriptions. Date added and date modified need to be clarified by the cross-domain folks; depending on the definition, they will be deleted or moved into the Utility set. Changed mind: decided this has to do with the date a resource was added or modified, not the database record. The date when the record was added or last modified remains in the Utility set. Hinnebusch will contact LeVan to create a new version and post it.



Taylor: are we ready to assign official numbers to this stuff?



C Lynch: when we looked at this issue six months ago, we looked from the point of view of a relatively stable DC proposal, which shaped a lot of our thinking. That's much less clear right now given the current confusion. Because of that, unless we want to take leadership in this area (which Lynch doesn't recommend), we need to be adaptive as they zigzag. If they in fact agree on a data model harmonized with INDECS, the implications for Z39.50 are unclear. In the course of moving to that data model, they will need to settle on final data elements. They are discussing DC 2.0, which may not have a migration path or rational connection to the current DC elements. People are making fairly expensive commitments to DC.



The relationship between DC and the cross-domain attribute set is unclear. Some DC access points are combined together in the cross-domain set with qualifiers to distinguish the 15 DC attributes.



14. Attribute Set Developers Guide (Janet Hylton)



Current guide is a combination of information from what G Percival provided and the minutes from the Madrid ZIG.



Introduction provides the purpose of the document and the historical and technical background.



Section 2 provides an overview of the attribute set architecture and the role the architecture plays in attribute set development, and defines the terms: sets, types, values, and classes.



Section 3: add the Zthes attribute set to section 3.1; remove the examples that are not compliant with the new architecture (e.g., CIMI; Extended Services -- what do we do with Permissions attributes?; GEO; Explain -- problems with some of the values).



Section 4 attempts to identify the logical sequence of work to be done to develop a new attribute set. Taylor suggested adding document sections to the Figure 1 (page 5). The text (end of document?) should indicate that sometimes the development of new attribute sets feeds back to the ZIG and may affect the design of the architecture or the cross domain or utility attribute sets. C Lynch wonders if it would be helpful to include early in section 4 a brief paragraph or two about the role of the ZIG and the Maintenance Agency to clarify that some attribute sets are managed by the ZIG and that others may develop independently (though they should inform/register with the ZIG).



Stovel: what about the maintenance of the attribute set after it has been developed? Developer should think about how the set will be maintained.



Section 4.1 - provide examples in the text mapped to examples in the Tables. Section 4.2 - it is important to have a clear data model so you can do accurate data mappings. Include outputs in all the figures in this section. Section 4.3 is the actual creation of the attribute set definition. Hinnebusch: we need to clarify the "dominant" set. C Lynch wants a more general statement such as "you need to think about how your new attribute set will interact with other sets to solve problems; this will lead to issues of dominance…." In cases of contradiction and interpretation (when attributes are mixed on the same term), the issue of dominance plays a role. Stovel: who decides dominance? Not the attribute set developer, but the person/client who does the query (and mixes attributes on the same term).



Big debate about whether developers can mix attributes from class 1 (new attribute architecture) and class 0 (old architecture) and where this information should go -- in the developer's guide? Yes. In each attribute set? No. As ZIG wit and wisdom (commentary)? Yes.



Stovel: how does an attribute set developer know whether their set can be the dominant set? C Lynch expects that in most cases people developing disciplinary attribute sets will want them to be dominant. Hinnebusch: we had an example today where this was not the case (Zthes). Lynch: but we talked Taylor out of that. More quibbling. Lynch: suppose you wrote an attribute set where you said that unspecified language values should be French; depending on whether that set is dominant or not, it has semantic implications. Stovel: the architecture explicitly disallows the specification of defaults. Denenberg wants the developer to say whether or not their set "may" be dominant. Stovel and Zeeman want the new attribute set to be silent on this topic. If the result doesn't make sense, it's a malformed query. Denenberg: if that's true, we're changing the basic gist of the architecture document. Zeeman/Hinnebusch: the architecture is wrong.



C Lynch: there's a sense in which there are practices you can follow in defining attributes that create precedence conflicts. Designing new attribute sets that are dominance insensitive is a good thing because it reduces ambiguity in the query, but we don't know how to do this yet. The whole issue of dominance came up because we didn't know how to create rules that avoided all this.



Section 4.4 - develop attribute combinations



Section 4.5 - updating Explain to the new architecture. Hinnebusch: what if the person who is developing the attribute set does not have a server? Scratch from this document. Agreed.



Section 5 - lessons learned from previous attribute set development. Remove the DC attribute set. Maybe we can add bib-2 lessons learned from Stovel.



15. Update on Profiles (Moen)



15.1. CIMI Profile



Moen: at the Madrid ZIG they spent a day and a half coming to agreement with the Aquarelle profile group. They signed a memorandum of agreement indicating that the profiles were merged, so there is now one international profile for museum information (November 1998). It was submitted to become an internationally registered profile (IRP) and is currently in a two-month comment period. Denenberg will announce that it's out for IRP comment. They will make any technical changes requested. It goes to the level of SC4 in the ISO hierarchy.



They will begin reviewing CIMI in light of the new attribute architecture (not waiting for version 4) to take advantage of the Utility and cross-domain attribute sets. Hinnebusch: we have deprecated bib-1 and are promoting bib-2. How will this work with CIMI? Will everything get new OIDs? It's not clear to Denenberg that they have to get new OIDs, but they can if they want. Hinnebusch: there are CIMI implementations, so they will definitely need new OIDs. OK. Does CIMI have a controlled enough implementation base to handle the migration? Not sure. Hinnebusch: how much cross-domain searching do the CIMI folks do with the library folks? Finigan: lots. So the CIMI work has to work in concert with bib-2. Moen: this is a resource issue for CIMI; not sure when they'll get CIMI to the new architecture. The one area that's most critical is cross-domain searching, so if that requires the new architecture, that will be the key source of pressure. See the CIMI web site.



15.2. Z Texas Profile



The Texas work started last August after a series of workshops supported by the state library. That work, together with the CIC and other reports about interoperability, indicated that people had either been oversold on what Z39.50 could do or had trouble with their implementations. The result was to prepare a set of guidelines for using Z39.50 in Texas. At the same time, there is an effort on the national level for interoperable bibliographic searching. The Texas approach has been to identify a set of user tasks in OPAC searching and focus on them so that people can quickly begin seeing the benefits of Z39.50. They currently have draft release 1 available, which specifies the role of the profile and the modular structure to identify functional areas with requirements that the specifications need to support. They have two categories -- for end users and technical services users. They've had three meetings to discuss requirements and the first draft of specifications. A number of vendors were present to discuss the specifications; they included suggestions to have Scan and to handle holdings information. Moen is now revising draft 1 to include the OPAC holdings schema and Scan. They want to have it ready in April and begin implementation in the fourth quarter. He wants a formal testbed to show interoperability and provide a reality check of the specification. He is talking with folks interested in the international bib profile. It looks possible to move forward on this.



Barbara Shue had a teleconference with participants from Australia, the UK, Canada, the US, and France. They found that there was no input from work going on in the European community, so she asked them if they'd like to participate in the next round of work. The decision was to use the Z Texas profile as the basis to start with when the next version comes out. The international community will review it to see if anything is missing. They will have more teleconferences and discussion online rather than face-to-face meetings. This is the ZILS group: Z39.50 Integrated Library System. Hinnebusch: what is the scope or mission statement of ZILS? Does it include acquisitions, circulation, etc.? Moen: concern about the scope was expressed at the Madrid ZIG. Currently the focus is search and retrieval, not update, etc. The scope is in part inherited from the use of the Z Texas doc.



Hinnebusch: not sure anything can be done, but the Ztexas profile is trying to drive the market. The profile says that certain things "will be supported," but it doesn't define "support." Moen wants a formal interoperability testbed so they can operationalize "support." The ZIG has never really defined "support."



Moen: the state of Idaho is releasing a proposal for a web Z39.50 client gateway that will reference the Texas specifications. Please read the spec and provide feedback.



St Gelais: are there plans to adopt the Texas profile as the international profile? Please clarify the work of the ZILS group. Barbara: there is a need for an international profile, rather than just national ones (the way Canada is approaching it). There were discussions in Washington D.C. and Madrid. The outcome of the Madrid meeting was that there was interest in an international profile, but no one was willing to take it up. Everything coming out of the National Library of Canada must be bilingual. She doesn't know whether the Texas profile will become or grow into the international profile. Denenberg: the discussion that led to the "ZILS" name recognized that the scope was broad, but realized that the name would get the attention of vendors. There seemed to be general agreement that it was a good name even if the initial focus was strictly search and retrieval. Zeeman: certainly this is being driven by international utilities and interest in virtual union catalogs. Hinnebusch: these folks generally don't care about integrated library systems. Moen: they will attempt to converge the Texas and international profiles rather than proliferate profiles for the same functions. The Texas profile will be built on the best thinking and previous work in other profiles.



Hinnebusch is concerned about the name "ZILS profile" because the library community is confused about Z39.50 but has clear ideas about what an integrated library system (ILS) does. The name of the profile could be misleading, e.g., suggesting support for searching patron records, doing remote circulation, etc. P Henrik is involved; they tried to do international profiles, but would not start with the Texas profile because they are aiming for a higher level of functionality. They are building on French and British projects to align profiles, e.g., CENL (Conference of European National Librarians).



Hinnebusch: do we have a publicly available list of profiles? Denenberg tries to keep a current list, but sometimes he hears about one not on his list; people have to tell him about them. P Henrik will try to send names of profiles to Denenberg, but they are always works in progress. Denenberg: that's OK; having a profile listed on his web page doesn't imply anything about the status of the profile. Slavko is looking at the profiles to do a reality check, with an eye towards retiring profiles that are no longer needed. Denenberg offered to remove some profiles if people want, or to organize the page to indicate status more clearly (e.g., which are IRPs). The web page is informal.



C Lynch: some of the profiles have standing completely independent of the ZIG. The ZIG really does not own the profiles, though it's useful to keep a registry of them and to know which are historical and which are "live." Is it time to put the ATS profile to rest? It had a political genesis and served its purpose. Stovel: for the ones like ATS over which the ZIG has some control, we should include the status in the profile doc itself, but not remove them from the page. Hinnebusch: the only reason the ZIG "owns" these things is because no one else wanted them. C Lynch: it would be perfectly rational for a library organization to create a profile for the minimally acceptable Z39.50 library catalog. Slavko saw an RFP a couple of months ago that referenced only the ATS profile. Denenberg: that's reason enough for us to examine profiles regularly. Moen's work will probably produce something more suitable for today.



Agreed to deprecate or remove the ATS profile when we have something better to supersede it. The ZIG must get out of the business of writing profiles for the library world. As far as the ZIG is concerned, the ATS profile is historical, but the ZIG must be careful about passing judgment on profiles. C Lynch: why don't we modify the ATS profile by adding a prefatory note about the historical context, when the ZIG worked in application areas in addition to protocol issues; the ZIG no longer works in this area and urges potential users of this profile to evaluate other profiles coming out of the library community. Perfect. Agreed. The ATS profile is orphaned, not expired or deprecated.



16. ZSQL update



Finnegan: the ZSQL testbed is underway, starting with a single server and then moving to a distributed environment. The next meeting is early June in New York. The testbed is to be completed by the end of the year, with a report to the ZIG in January 2000. The Maintenance Agency page points to her testbed page.



17. Z39.50 Over HTTP



Denenberg's proposal. Background: LC is joining the W3C (Denenberg will be the LC representative, at least initially). LC is digitizing collections, creating metadata groups, studying rights issues, etc. Denenberg went to the W3C's query language workshop in December. The impression he left with was that they have no idea how to develop a query language for the web, and that the web community has more interest in Z39.50 than most people at the ZIG realize. There are three primary reasons people think you can't use Z39.50 on the web: the web wants ASCII encoding, a protocol that runs over HTTP, and statelessness. C Lynch: those are technical or architectural objections to Z39.50. But the overwhelming sense he has of that workshop (second hand) is that people arrived with a solution, then asked what the problem was. He can't figure out what problem they think they're really trying to solve. Many of the problems cited were things that currently don't happen on the web. Big-name companies were there who were convinced that this stuff was going to happen on the web. The discussion was disjointed, with no proper follow-up to develop focus. They did talk about a lot of different problems and scenarios that he didn't think these companies really believed would ever happen.



C Lynch worries that Z39.50 over HTTP is merely a mechanical issue. Denenberg: the problem is a comprehensive information retrieval protocol for the web. Hinnebusch: the problem is that the library community has huge databases of good information, and currently there is no way to integrate them into the generic web querying that people are doing instead of using libraries. C Lynch: this is a real problem, but it's political, not technical. Some folks have exported metadata to web search engines. The difficulty is convincing web search engine vendors to trust metadata that they didn't extract themselves.



Denenberg: the real problem is that the Z39.50 query and other queries have no standard way of being expressed on the web. We should create a profile that solves this. Denenberg wants to remove the objections to penetrating the web with Z39.50. P Henrik: the real problem may be firewalls. Agreed.



Denenberg and five or six other people collaborated on the document. The W3C internally did not have consensus on the problem. The problem is firewalls and the transport mechanism. Someone should present a paper at the next W3C workshop. C Lynch thinks that is extremely rational; once we have stability on how to transport Z39.50 over HTTP, we need to put it out as an RFC. Agreed.
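To make the transport question concrete, here is a minimal sketch of the kind of tunneling under discussion: framing a binary Z39.50 APDU as the body of an HTTP POST so it can pass through firewalls that admit web traffic. This is not from the meeting document or any agreed profile; the endpoint path, the media type, and the placeholder bytes are all assumptions made for illustration, and a real APDU would be a BER-encoded ASN.1 structure.

```python
# Illustrative sketch only: one possible way to tunnel a Z39.50 APDU
# through a firewall inside an HTTP POST. The "/z3950" path and the
# application/octet-stream media type are hypothetical choices, not
# part of the proposal discussed at the meeting.

def wrap_apdu_in_http_post(apdu: bytes, host: str, path: str = "/z3950") -> bytes:
    """Frame a binary APDU as a raw HTTP/1.0 POST request."""
    headers = (
        f"POST {path} HTTP/1.0\r\n"
        f"Host: {host}\r\n"
        f"Content-Type: application/octet-stream\r\n"
        f"Content-Length: {len(apdu)}\r\n"
        "\r\n"
    ).encode("ascii")
    return headers + apdu

# Stand-in bytes; a real initRequest would be BER-encoded ASN.1.
fake_apdu = b"\x30\x05\x02\x01\x01\x80\x00"
request = wrap_apdu_in_http_post(fake_apdu, "example.org")
```

The server side would unwrap the body, hand the APDU to an ordinary Z39.50 back end, and return the response APDU in the HTTP reply body; statelessness would still have to be addressed separately, e.g., by carrying a session token in a header.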



Denenberg needs participation from folks with expertise in HTTP. Are there specific technical comments on the approach in the document? Stovel: if you narrow the focus to firewalls, you don't need some of what's in the doc. We need a document that we can point people to when they raise any of the 4-5 issues he raised. Hinnebusch: you don't do that in a profile; you can do it in a discussion paper (rename the doc). It would work better to state the purpose explicitly (rather than leave it implicit), but not in this venue. We do have a problem with firewalls, and there are those other issues to be addressed. Other groups are working on ways to push their traffic through firewalls (e.g., PointCast). Let's look at what other groups did to solve the firewall problem.



Denenberg wants comments on his doc from folks who know HTTP.



18. Future Meetings Schedule



Next ZIG meeting in Stockholm, Sweden: tutorial August 9-10, ZIG meeting August 11-13.

Tutorial: basic introduction and at least two toolkits, August 9;

what's happening in Z39.50 (e.g., Explain, attribute architecture, ZSQL), August 10.