The day began again with food, conversation and toe-tapping music. Our DBC hosts certainly treated us well.
To solve the original problem with the OPAC record (no mechanism for specifying what kind of MARC record is within it), LeVan proposed a way to specify the national MARC record through OIDs. A limited set of primitive element set names is also needed to specify summary or detailed holdings.
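A concrete way to picture the proposal (the qualified OID arcs and the element set names below are illustrative assumptions, not registered values; only the base OIDs for OPAC, USMARC, and UNIMARC are registered):

    -- Illustration only.  1.2.840.10003.5.102 is the registered OPAC
    -- record syntax; 5.10 and 5.1 are USMARC and UNIMARC.  The qualified
    -- arcs and element set names here are assumptions, not registered.
    usmarcOpac  OBJECT IDENTIFIER ::= { 1 2 840 10003 5 102 10 }  -- OPAC carrying USMARC
    unimarcOpac OBJECT IDENTIFIER ::= { 1 2 840 10003 5 102 1 }   -- OPAC carrying UNIMARC

    -- Primitive element set names selecting the holdings detail, e.g.:
    --   "B"  bibliographic record plus summary holdings
    --   "F"  bibliographic record plus detailed holdings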
Henrik was concerned that this decision was not the best way to use the protocol. He suspected that the ZIG is losing momentum and making decisions aimed more at protecting investments in obsolete v2 implementations than at finding the better solution available in v3. D Lynch attempted to clarify the issue: whether we should use separate OIDs (separate record syntaxes) for (multiple) OPAC records or use GRS and e-spec. Should the record be ASN.1 or GRS? Henrik's concern was not the details of this decision, but the apparent trend to protect old v2 software. LeVan requested a technical analysis of the pros and cons of the v2 and v3 solutions. If it is a no-brainer to do this with the more sophisticated tools in v3, he would be happy to do that, but the combination of OID and element set name is easier to implement (simpler to specify) than e-spec.
Denenberg: the issue is not just e-spec, but the question of why we chose the approach that we did and whether to perpetuate the OPAC record. If GRS had been around back then, we would never have done the OPAC record. There was disagreement about whether we would have done MARC in GRS. Zeeman: this is a red herring; GRS is for advanced retrieval. Denenberg: the problem we are trying to solve is advanced retrieval. GRS was developed to solve the problem of identifying the type and format of an individual element within a record, so it is well suited to this task. But people are afraid of GRS, so we created "simple GRS": GRS restricted to just the essential features. We should re-examine that. But people are also terrified by the prospect of re-evaluating GRS. We do not need to start over with GRS, but we should fine-tune it to illustrate that it is not terrifying. Had we done that earlier, GRS would not have been so frightening. Henrik observed that this was going way beyond the specific solution for OPAC currently under discussion; his point is that American developers appear to have repudiated Z39.50 v3, which is what European developers have implemented.
Hinnebusch: if you look at the profiles being developed (other than ATS), all of the profiles except GILS use v3 features. Americans have not repudiated v3. The discussion about retrieving holdings information, which started back in v2 days, may have contributed to a perception that the ZIG does not want to do v3 any more. There is a heavy investment in systems that are already providing bibliographic access, and yes they are older systems, but America is moving to v3. D Lynch: this discussion is too general, too extreme, and unproductive. Re-visiting GRS would be horrible; it is fine as it stands and is being used. If we choose to use e-spec, then we need to profile how it will be used in this circumstance. We could, for example, set a default variant. But once you say that you are using e-spec to do this, does that mean that servers are required to do all of the e-spec machinery? That's why it still seems simpler to use OIDs. How much machinery do we need to bring to bear on what seems to be (at least for now) a simple problem?
Pedersen: our discussion is proceeding at different levels. Henrik is talking politics, not concrete solutions. If we look back to 1992 and 1993, European projects were based on the SR standard, but we saw a problem with that because developments in the States were going in a different direction. So we looked at Z39.50 and heard that v3 was the future. We took that to heart, and most European projects are now based on v3. Even though we probably can come up with technical justifications for an OPAC record that is compatible with v2, it is very important politically in Europe that we move ahead with v3.
Wibberley: the v3 facilities are the appropriate tools; using the v2 tools is a kludge. LeVan: show me how the heavier v3 tools are more advantageous than the v2 tools, and at what cost. The amount of prose necessary to describe how to do this in v3 will be greater than for v2, and the implementation will be harder. All agree, however, that a v3 implementation will be more flexible. Waldstein heard rumblings that there is a certain interest within ISO in doing this, and if that is true, then the OPAC record may be the hard way to go. Hinnebusch: if there are overwhelming political reasons to cast the record in GRS rather than ASN.1, then we should do that. Are there any implementors who would object to using GRS?
Pieter Van Lierop: GEAC has been implementing U.S. MARC OPAC records based on an informal discussion at the April ZIG meeting. Hinnebusch asked GEAC's preference: modify the OPAC record they are using, or go with GRS? Pieter wasn't sure of GEAC's preference, but he was sure that they couldn't sell library systems without holdings. Tibbetts said that the University of California has no vested interest in OPAC or GRS, but the final solution must be able to get bibliographic information and holdings in one record. Hinnebusch: technically the issue is not v2 versus v3, though politically (perceptually) it may be. If we do it in v3, does that solve the perceptual problem?
Denenberg has the opposite view from LeVan: we are not using enough of the machinery that we have to solve the problem. He thinks e-spec is an appropriate part of the solution. It will be easy enough to profile e-spec, but does that imply that a server has to support the full capability of e-spec? No. We can simply profile e-spec for this particular problem and not incur the full burden of the capability of e-spec. Some of the Europeans would be willing to help with this profile. Hinnebusch, however, supports LeVan in the sense that our discussion is only theoretical until someone sits down and does the analysis. He thinks that having to use a profile rather than looking at the standard to do this has been a problem in the past.
Waldstein: both papers he read on holdings required a v2 solution. Hinnebusch: both papers were crafted by the National Library of Canada (NLC) based on discussions with vendors. Zeeman: any decisions that require NLC to migrate to v3 before they are ready will be unacceptable. There is no hostility on the part of NLC to moving to v3, though there are problems with having a timetable imposed upon NLC to do this. D Lynch: this is an eloquent argument for doing holdings in v3 (if the v2 and v3 solutions are about the same amount of work). Maybe we can and should promote the implementation of (or move to) v3 by requiring holdings to be v3. Davison: who are we trying to serve here? We are trying to serve not just the vendors but the institutions that they serve because there is a pressing need for interoperability on retrieving holdings. We need the fastest solution, which is not only a question of implementation time, but decision-making time. We do need a forward-looking solution, but if we re-open the discussion, we lose four months (until the next ZIG meeting). Denenberg: the proposed decision (OPAC record) also requires work; for example, the casting of element set names and ASN.1 will also take time. Hinnebusch disagrees. LeVan: drop this!
?? (Swedish?): if the time slips much, each nation will craft its own solution, which will create problems down the line. We need a solution now, even if it is not the best solution. Hammer: a v3 solution should not be perceived as just a political move; it is the better solution because it is extensible. J Pearce is concerned about the interoperability of linked metadata sets, not just holdings. The danger: will you need 26 rights-management things to go with 26 OIDs? Vendors may have to do the GRS solution anyway, and soon, if we want international interoperability.
Hinnebusch: where do we go from here? LeVan: we have agreed on the architecture, but not the technical implementation. If there is a strong feeling or clear technical superiority, then go with it. Davison: we need the architecture that was agreed upon documented. Zeeman will have it documented by mid-September. We need to go with the decision made last night, where Hinnebusch and Fay Turner do the elements and ASN.1 for the OPAC record solution. If there are people willing to document the GRS and e-spec path in the same time frame, fine -- provided that there is a mechanism to decide between the two. Denenberg agrees to document the GRS and e-spec path. D Lynch and Hinnebusch suggest an interim meeting. Stovel: the elements have to be done regardless of which path is chosen. Agreed. Loy: would a GRS solution without e-spec be acceptable to NLC? Or can we have both paths? Denenberg: if we find out that e-spec is hard, there could be a GRS solution that works with both v2 and v3. Zeeman had no objection to that approach. Hinnebusch: would that be acceptable to everyone? Pieter (GEAC): it's a pity that no one talks anymore about U.S. MARC holdings records or UNIMARC; GEAC also implements cataloging, so people should also think about Extended Services cataloging for structured holdings. Davison: do we have a mechanism for making a decision so that implementors can begin working? LeVan: but you cannot say what kind of MARC record you want with GRS without e-spec. Denenberg: but you can use a canned element set name to emulate e-spec and do this in v2. We can come up with something concrete within three weeks; we can have an interim meeting. Hinnebusch: we can do this on the list; we have a compromise solution that will work.
Gatenby: everyone should be looking at the ISO data elements. Hinnebusch will put the ISO data elements doc on a secure server accessible by the ZIG.
Zeeman is not sure that we have a mechanism for reaching consensus on the list. Hinnebusch: anyone who has reservations about the solution posted to the list, please make it known on the list! Hinnebusch will poll the ZIG list to determine consensus. Denenberg will make the status available on the Maintenance Agency web page. Wibberley: specify a clear deadline for review. Hinnebusch: we will have a polled decision by the end of September.
The union cataloging profile was discussed at several previous ZIG meetings. The focus today is on areas that have been developed since the last meeting. The purpose is twofold: to integrate union catalog maintenance into creator workflows and to allow all creation and update processes to be done within Z39.50. We are defining procedures for concurrent update, record locking, detecting duplicate records, reviewing records, merging records, and global update. Three of these procedures have been elaborated in the profile based on discussion at previous ZIG meetings. The profile also clarifies the conditions for use and provides diagnostic messages.
There was much discussion in the past about whether the profile is simply for maintaining a union catalog, or whether it is more generic with broader application for cataloging with Z39.50. The current draft of the proposal tries to identify those parts (business transactions) that are not specific to a union catalog, and to identify priorities for staging an implementation.
The effects of the profile on the base standard were then discussed.
Hinnebusch sees a collision problem when two clients issue global updates on persistent result sets (the same records). The server must be extremely careful; maybe we should not use persistent result sets. Wibberley: is the intent to lock those records selected for global update until the change is made? J Pearce assumed that a record-locking scheme could be used. Hinnebusch: we will need a global lock at the result set level. Gatenby: records are locked at the time of the Present. Zeeman: the whole concept of global change with Z39.50 is troublesome; his server will not support it, though the client may. But clearly there will be servers prepared to do global change as described here. There was concern about minimizing the danger that global update poses to servers. Making global update apply only to a result set selected by the client is a safer approach. Whether the result set should be persistent or transient is irrelevant: can you create the result set one day and do the update some time later? Yes. LeVan: a persistent query may be better than a persistent result set. Gatenby: but then the client doesn't know how many records it will be touching. LeVan: the client doesn't touch records, the server does. Zeeman: a persistent query has to be a single statement.
The global update external structure includes the following elements, none of which is repeatable except fieldToBeChanged: persistentResultSetPackageName, numberOfRecords, creationDateTime, fieldToBeChanged, oldString, newString, globalUpdateType (e.g., whether you are changing a whole field, a subfield, etc.), and case (Boolean). All are mandatory except fieldToBeChanged, which is optional. Hinnebusch assumes that because these are external structures, they may be record-syntax specific. ??: is it more important to specify the conditions under which the records should be updated, rather than the specific records to be updated? There was concern about performance if the update is delayed from the time the result set was created. Hinnebusch: the query that creates the original result set is the specification, locking is the only safety against collision, and yes, there is a performance hit in doing multiple updates. LeVan: locking can be a non-issue; for example, the server may only do updates on weekends when the library is closed.
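For concreteness, a minimal ASN.1 sketch of what the content of such an external might look like, interpolated from the elements just listed; the tags, types, the named values of globalUpdateType, and the SEQUENCE OF for the repeatable fieldToBeChanged are all assumptions, not text from the draft profile:

    -- Sketch only: interpolated from the element list above, not the
    -- draft profile's actual ASN.1.
    GlobalUpdateSpec ::= SEQUENCE {
        persistentResultSetPackageName  [1] IMPLICIT InternationalString,
        numberOfRecords                 [2] IMPLICIT INTEGER,
        creationDateTime                [3] IMPLICIT GeneralizedTime,
        fieldToBeChanged                [4] IMPLICIT SEQUENCE OF
                                                InternationalString OPTIONAL,
        oldString                       [5] IMPLICIT InternationalString,
        newString                       [6] IMPLICIT InternationalString,
        globalUpdateType                [7] IMPLICIT INTEGER {
                                                wholeField (1),
                                                subfield   (2) },  -- values assumed
        case                            [8] IMPLICIT BOOLEAN }     -- case sensitivity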
C Lynch: it is useful to say what these services do. There are any number of protocol mechanisms that can do these tasks, but we need to clarify what these services are expected to do. He is puzzled by global update. You really need to lock the database so that new records coming into the database are not missed (if you want to change all records that meet the conditions) when there is an arbitrary time delay between the creation of the result set and the update. Are we defining this as a traditional update or as some exotic two-step animal? Needleman agrees with the need for clarification. Wibberley: the global update capability is not peculiar to union catalogs, but is a generally useful feature. His only request is that oldString and newString be generalized to oldValue and newValue so that update is not limited to textual information. LeVan: do you really anticipate allowing Z39.50 to do this? This is really harder than we first thought. Please consider doing this some other way. Hinnebusch, LeVan and others: this should be implemented as a synchronous protocol. Denenberg: this will have virtually no impact on the protocol. Agreed. ??: but will doing global update with Z39.50 replace the need to do this in many different ways?
Hinnebusch was concerned that word will get out that Z39.50 is doing global update, so people will begin doing this elsewhere. C Lynch: people from the traditional DBMS community may stumble across this and respond that the IR community doesn't know what they're doing; Z39.50 global update will create an unpredictable database because the semantics are unpredictable.
Gatenby sees global update as filling out a form to do a batch process. Hinnebusch: so can we recast the profile that way? Currently the implication is that this change is going to happen automatically. ?? suggested that global update be removed from the profile at this point and brought back to the ZIG later, after the problems have been reconsidered. J Pearce: what is the difference between a global update request (batch process) and a record replace? Hinnebusch: record replace treats each record atomically. Zeeman: we are losing track of the whole basis of the Update Extended Service, which is merely a suggestion to the server to make a change; there is nothing in the protocol rules that says the server is required to do this. Wibberley: if you use the Extended Services facility to request synchronous behavior (for example, wait or do not wait), the server is required to respond that it will or will not comply. One way you can use Update is to request a synchronous database transaction. The search created the result set that identified the records that you want to update, so there is no distinction between the individual update and the batch job. Hinnebusch: except that with record replace you actually transfer the record to the client; with global update, the client has only pointers (a result set) and no mandatory lock on those records. Zeeman: but it is the server's responsibility to do what it has to do. ??: though global update is a traditional DBMS task, bringing it into Z39.50 makes sense. Finnegan was concerned about global update across several distributed servers: how do you maintain consistent control across servers? Zeeman: the fundamental assumption of Z39.50 is that data belongs to the server, not the client. Hinnebusch: is this an issue of needing to clarify the function? Is global update a request for a function or an order to do something? He still thinks we need to make locking mandatory. Wibberley clarified that the current proposal is intended to do a global update of a single database on a single server, because Z39.50 does not allow querying of a virtual database that is in reality a cluster of multiple databases on multiple servers (which would be a dangerous scenario for global update). Dekkers said that too much has been put into this and that global update is a misnomer.
C Lynch suggested renaming this something like "batch edit and replace," and specifying clear semantics: each record in the result set is serially edited by replacing oldValue with newValue, after which an attempt is made to update it in the database if the record has not changed. The semantics will need some language about whether the record is locked, and whether the record in the persistent result set is the same as that in the database. We also need caveats like: "warning, this is not like the DBMS task of global update." We need to aggressively alert people that this is not an atomic database operation. Then it becomes reasonable and rational in Z39.50 as an Extended Service. Zeeman: those caveats need to be placed everywhere in the Update service. Denenberg: there are some caveats in the Update service in the standard. The consensus is that we need a name other than global update. The update applies to the records indicated by the result set, which should make it sufficiently clear that this is not a global update of the entire database. But we have not specified what the task package actually looks like. Zeeman and Denenberg disagree about whether there is sufficient weasel language in the Update Extended Service in the standard.
Gatenby and J Pearce will revise the profile based on comments about global update. Hinnebusch: do international strings in oldValue and newValue enable you to specify the character set identifier and language? The global update external itself is a sequence. There was brief discussion about changing multiple different things and agreement to do this in multiple Update requests. Should you be able in one request to change two different oldValues in the same field to the same newValue? Should oldValue be repeatable? Wibberley: if you want to change multiple things in one record and the server can do only one of these things, should it do what it can or do nothing? D Lynch: the ZIG is over engineering (again) what it does not understand. LeVan: if you give a mouse a cookie, he is going to want a glass of milk! Ha ha. Denenberg: let's just fulfill the requirements of Gatenby and J Pearce.
Stovel: how many implementors have implemented Diag-1 as opposed to Bib-1 diagnostics? Only Wibberley raised his hand. Denenberg: in the past we discussed having different diagnostic sets for different facilities, but instead we put them all in Bib-1. Should we continue with that or do something different? Wibberley likes the idea of a table that groups diagnostics by facility, even if all diagnostics are kept in Bib-1. Denenberg: later in our development we did prepend an indicator of the facility, for example, ES for Extended Services diagnostics. Should we prepend to the older ones so that we can sort them? Stovel likes Gatenby's proposal of both a long list and a sorted (classified, prepended) list, because that way diagnostics could appear in multiple places as appropriate. Waldstein considers Diag-1 a good idea, but he has not implemented it. Denenberg: people use Bib-1 and AddInfo, not Diag-1. There is no intention to make AddInfo machine-processible. Waldstein: if a server sent him Diag-1, he would do the work to parse it. Stovel: what is the purpose of AddInfo? It is text to be displayed for the user of the client. Wibberley (admittedly a GRS bigot) would like a more general structure for Diag-1; the Diag-1 ASN.1 is inflexible. A more flexible way of parsing diagnostics would be more helpful. Denenberg: using a GRS schema for diagnostics would not require changing the standard. Wibberley: that is the form he would like for diagnostics that can be parsed. Hinnebusch: we have gotten off the subject of what Gatenby wants. No one much cares whether we restructure Diag-1. Denenberg: if we do want to talk about the mechanism for adding new diagnostics, do we want to do it through Bib-1 or Diag-1? All agreed on Bib-1. People should send proposed diagnostics to Denenberg, and we can discuss them at the next ZIG meeting.
Gatenby will share her classified chart with Denenberg, who will make it available as an auxiliary list on the web. Pieter (GEAC) is using Bib-1 diagnostics; when he cannot find a relevant diagnostic, he sends error 100 with English text. He will send these to Denenberg for consideration. Denenberg: it is only the AddInfo that is not intended to be machine-processible. If you have errors that contain text that you want to display to users, then error 100 is the way to go. If you feel that these text messages should be enumerated so that the client can look them up in a table and figure out what to display, then we can do that.
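For reference, the default diagnostic record in the standard makes the division of labor clear: the condition number is the machine-processible part, and AddInfo is display text. The ASN.1 below is paraphrased from Z39.50-1995 (consult the published standard for the authoritative text), and the value at the end, following Pieter's error-100 practice, is hypothetical:

    -- Paraphrased from Z39.50-1995; see the published ASN.1 for the
    -- authoritative definition.
    DefaultDiagFormat ::= SEQUENCE {
        diagnosticSetId  OBJECT IDENTIFIER,       -- e.g. Bib-1: 1.2.840.10003.4.1
        condition        INTEGER,                 -- enumerated diagnostic number
        addinfo          CHOICE {
            v2Addinfo    VisibleString,           -- display text for the user,
            v3Addinfo    InternationalString } }  -- not machine-processible

    -- Hypothetical value: no enumerated diagnostic fits, so send
    -- condition 100 ("unspecified error") with explanatory text.
    diag DefaultDiagFormat ::= {
        diagnosticSetId  { 1 2 840 10003 4 1 },
        condition        100,
        addinfo          v3Addinfo : "holdings file is being rebuilt" }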
The ZSQL testbed will have a front-end web GUI on top of a Z39.50 client which will query a Z39.50 server on top of different kinds of SQL databases (e.g., Oracle, Informix, TexPress). The URL for draft-5 of the proposal is http://www.dstc.edu.au/DDU/ZINC/sql5.htm
The ZORBA project is looking at information retrieval in an object-oriented environment and trying to define an IDL that will sit on top of existing Z39.50 services or any information source (for example, X.500, SQL, or another database source). A draft will be available within the month. The generic IDL will sit on top of any information source, with an optional extension for full Z39.50 access (via a gateway). The URL (Finnegan thinks) is http://www.dstc.edu.au/DDU/ZORBA
Current status: CIP specification 2.2 was ratified by PTT in January 1997. But it soon became apparent that they needed to consolidate. The final version will be 2.4 or 2.5. The current plan has identified issues to be addressed in technical notes, most of which are written and will be reviewed next week. CEOS wants GEO clients to be able to search CIP databases and vice versa. CIP is Z39.50 v3; GEO is v2. Issues include alignment of the CIP profile with the GEO profile (and other profiles; CIP has been aligned with the collections profile since the last ZIG meeting), the ordering scenario, revisiting the concepts of guides and local attributes (Explain), handling theme collections, and other issues like the translation of Z39.50 search requests into database requests and performance analysis. The final profile will be submitted to TC46.
A guide is a textual description. PTT wants to simplify the guide concept, flatten the guide collection hierarchy, and harmonize the guide concept with Z39.50. Theme collections contain subsets of other collections; they are working on defining conditions that create these subsets. CIP has about 250 attributes, but participants will have local attributes as well. How will we discover these local attributes? Should they define local attributes fully in Explain, and/or fully or partially in a collection database? Initially they were very enthusiastic about Explain; now they are less so. They are modifying the order concept to deal with connecting and routing orders to multiple order handling systems. PTT takes ordering very seriously because NASA and other participants want to support ordering. CIP will send orders directly to the information provider, not broker the process. Orders will be primarily image data sets (unprocessed raw data). The CIP attribute set is published on their web site. There has been user demand for things like dates, etc.
Future directions for CIP include development and testing of prototypes by various participants (lessons learned will feed into the next revised specification for release B), liaison with other groups (like ZIG, OGIS, CCSDS, MTPE), and possible registration as an IRP if deemed to be worthwhile. They hope to have stable release B in April 1998. The URL is http://ceos.ccrs.nrcan.gc.ca/taskteam/cip.html
Some directions are obvious. As part of the philosophy of not doing work in the ZIG that other communities of experts are already doing (or are better placed to do), we need a scope statement. For example, how far do we go with Bib-2 before we get into indexing and abstracting databases and way past bibliographic databases? We need an attribute set specifically based on the Dublin Core work (once that work is sufficiently mature), but it is not clear where we need to go with this. Philosophically there is growing consensus within the ZIG that attribute sets should be developed and maintained outside of the ZIG by people with expertise in that discipline or area. Denenberg would pass out OIDs as appropriate. Hammer: independent of GILS, when are we ready to move over to the new attribute architecture? Who will do the work?
Denenberg: if there were a Dublin Core attribute set within Z39.50, how do you foresee it ever moving along procedurally? C Lynch: there is probably more intersection than you might think between people heavily involved in Dublin Core and organizations with substantial expertise in Z39.50; he foresees Dublin Core folks eventually saying they need a set of search attributes. Stovel: there will be implementors who have to deal with this. C Lynch thinks OCLC will raise this issue. Denenberg: Dublin Core might not be the best application for testing the new attribute architecture. CIMI is not ready to do it. Will GILS do it? It's not clear that GILS is still interested in Z39.50. Hammer: GILS is still interested in Z39.50, but is discouraged with the operation of the ZIG. Waldstein, Hinnebusch: it may be premature to talk about the new architecture and how to move to it when the document has not been published on the web. C Lynch will attend the Dublin Core meeting in October and ensure that a Dublin Core attribute set gets on their agenda; he will post any consensus they reach. Denenberg: maybe we can take some action before the next ZIG meeting. If C Lynch sends Denenberg the notes and the document, he will try to post the revised version so that we can start getting feedback. If there is a lot of controversy, then we can't do anything until we discuss it at the next ZIG meeting; if there is no controversy, maybe we can start sooner. C Lynch will try to get the revisions done by mid-September. Waldstein: does the new attribute architecture assume the type-1 query? Yes. It assumes a version 3 environment where we can intermix attribute sets. J Pearce: Dublin Core is a bibliographic data element set, and any group assigned to do a bibliographic attribute set should encompass Dublin Core, which may be more than one attribute set because of different local qualifiers. Stovel does not think the Dublin Core people see these data elements as an attribute set; their approach is very different. She wants a Bib-2 for MARC library databases that is distinct from Dublin Core. Hinnebusch: C Lynch said that NISO may be willing to address this, but they are a national body. Stovel: soon NISO may take members from anywhere. C Lynch: the Director of NISO (Pat Harris) did say that participation in the various committees is open to anyone. Hinnebusch: let's move that forward.
All of the European Commission projects are required to put up web pages. See also http://www2.echo.lu/libraries/en/software.html. They had more than 50 library projects funded, and most of them were using Z39.50. The Commission is currently negotiating a ONE-2 project; the general scope is to extend the functionality of the ONE project in advanced retrieval areas, holdings, circulation, etc. Hammer: many new projects are not technical projects, but groups of users requesting new and interesting things. ?? is concerned that user expectations are higher and that we may not be able to deliver, perhaps because we oversold the standard. Hammer: yes, users keep us on our toes.
Denenberg will create a web page with links to available software.
German Library Project Status Report (Rusnak)
Denenberg: can we reference this on the web page? Yes, she will send him email with a pointer to their web page. Pieter: do they use the ILL protocol for Item Order? Yes, they are embedding ILL in Item Order. Zeeman: how are you delivering holdings? They get holdings information from the ZDB database (the German national holdings database), which contains clear codes for the different libraries and the shelf locations within the libraries; these codes are included in orders sent to libraries. They use two searches, one to retrieve the bib record and one to retrieve holdings. They may switch to a holdings database keyed on ISSN. Zeeman: what record syntax are they using to retrieve holdings? Holdings are delivered using SUTRS. Pedersen: do you have an Explain service? No. Germany is a participant in ONE, but has not otherwise implemented the Explain service.
When the PTO started the Z39.50 project, they had some automation in place with direct access to a mainframe containing about 2,000,000 patents and growing by 200,000 annually. They wanted to implement Z39.50 on their local area network for technical and political reasons. Technically, they did not want to depend on only one text search engine in the future; they wanted to diversify and use different search engines in parallel. The Academy wants to teach only one procedure regardless of the back-end search engine. Politically, Z39.50 is the information retrieval standard and, as a government organization, the PTO has to support standards. The expectation or perception was that industry would simply take over the development of different clients, search engines, etc., and U.S. PTO would simply be a user. Three years later they realized that there was no off-the-shelf Z39.50 client and server software that would have all of the features they need. They purchased the MSI/Messenger server developed with v3 and looked for a fully compatible client. Not finding such a client, they had to spend time and money to develop it. Then they discovered that some of the patents were too long for the client. The CGI group (Zeeman's group) helped them develop segmentation. Another formidable problem was TCP/IP and Novell. Not all flavors of TCP/IP worked with their client; for example, packets were different sizes. Problems were also created by a mixture of different hardware. Eventually they started to question the wisdom of using Z39.50 at the PTO.
The life-cycle process at the U.S. PTO resembles a ZIG meeting: lots of people offering and recording ideas with no priorities or fiscal responsibility. Anyone can promote requirements, so they had 450 requirements. Some of them were too expensive or not needed, and some stemmed from misunderstanding. Nonetheless, it is almost impossible to get rid of requirements or to add new requirements later in the process. They learned that, instead of putting the user in the driver's seat, it would be better to provide two or three options for discussion by the user. Providing ethical indoctrination and technical training to new examiners was also a problem, so they decided to do Computer Based Training (CBT). But developing CBT required detailed knowledge of the design of the client. The upshot was that they were unable to deliver CBT when they deployed the client. The gap had to be bridged with paper.
After four months of operation, they went through the log of complaints at the Help desk. There were only six complaints; four were technical but had nothing to do with Z39.50. Then they went through the MSI log and found that, out of 2,200 potential users of the Z client plus many web client users, there were only 342 Z39.50 sessions in May, 259 in June, and 459 in July -- an average of only sixteen user sessions a day. Lesson learned: do missionary selling of the Z client in-house.
Zeeman: are the other users still using the mainframe client, which has been at the PTO for 15 years? Mazur thinks so, but does not have any vehicle for monitoring them or gathering statistics. Hinnebusch: the issue may be speed, no matter what people say about GUIs and the web. Mazur: not sure whether foreign patent abstracts are accessible with the workstation GUI. Wibberley: getting users to use your desktop products is difficult; particularly with long-time professional searchers who have already mastered the full functionality of older clients, the transition to a simpler, more intuitive interface does not happen quickly. It takes time for people to learn the new client and become comfortable with it.
Donnelly: have you given any thought to opening up a test server to encourage development of third-party clients? Mazur: there are several reasons why we cannot do that. First, we can't offer free access because we sell access. What we do need is a stable GUI and a clearly defined way to construct the query that cannot be changed (because it is documented and people have been trained). If anything goes wrong here, the government is liable. If a patent is granted and something like it turns out to have been patented already, the government takes the blame.
Wibberley: was there any discussion at that meeting about Digital Object Identifiers (DOI) or resolution of copyright? C Lynch: no; no discussion of rights management. Denenberg: if there is some thinking about registering metadata relevant to Z39.50, should we suggest registering Explain or anything else?