WO2000045288A2 - Computerized translating apparatus - Google Patents

Computerized translating apparatus

Info

Publication number
WO2000045288A2
WO2000045288A2 PCT/IL2000/000060
Authority
WO
WIPO (PCT)
Prior art keywords
pictorial
images
articulation
articulations
database file
Prior art date
Application number
PCT/IL2000/000060
Other languages
French (fr)
Other versions
WO2000045288A3 (en)
Inventor
Jacob Fromer
Original Assignee
Jacob Fromer
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jacob Fromer filed Critical Jacob Fromer
Priority to EP00901314A priority Critical patent/EP1149349A2/en
Priority to JP2000596476A priority patent/JP2002536720A/en
Priority to AU21270/00A priority patent/AU2127000A/en
Publication of WO2000045288A2 publication Critical patent/WO2000045288A2/en
Publication of WO2000045288A3 publication Critical patent/WO2000045288A3/en

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00Teaching not covered by other main groups of this subclass
    • G09B19/06Foreign languages


Abstract

A computerized dictionary/translating apparatus is disclosed, which comprises an additional screen (G) and suitable software for displaying, besides the textual form of the retrieved/translated word, also the mouth articulation of the word when verbally pronounced.

Description

COMPUTERIZED TRANSLATING APPARATUS
BACKGROUND OF THE INVENTION
The present invention generally relates to computerized translators. In the art of computerized self-teaching apparatus there are presently available devices of the kind known as "Electronic Dictionaries". These devices comprise a keyboard and a display screen. Words entered in one given language are displayed on the screen translated into another language according to the programming of the device. More advanced apparatuses include an additional audio feature: producing, through suitable electronic circuitry and a built-in speaker, the spoken sounds of the translated word.
This important assistance to users is insufficient, in the sense that it still lacks guidance on how exactly to pronounce the word, namely a visual presentation of the mouth and the related organs (lips and tongue). It is thus the object of the present invention to provide a computerized translator capable of displaying the actual pronunciation of the translated words.
It is a further object of the present invention to utilize for that purpose a database of basic phonetic syllables of which all words in the translated language are composed.
It is a still further object of the present invention to display the pronunciation of the translated words in a dynamic, streamlined fashion, using the morphing technique.
SUMMARY OF THE INVENTION
Thus provided according to the present invention is a computerized dictionary apparatus, comprising a first database file of words in given source and target languages, and a second database file of textual pronunciation keys of the basic phonetic syllables relating to the words of the said first database file, characterized by a third database file of sequential pictorial mouth articulation images for each of the said basic phonetic syllables, means for the suitable selection of the source and target languages, means for inputting a selected word included in the first database file, means for displaying the selected word in a textual form, means for locating the selected word in the first database file of words in a given source language, means for selecting the sequential pictorial mouth articulation images relating to the phonetic syllables of the textual pronunciation keys, and means for displaying the sequential pictorial mouth articulation images of the selected word in differential and combined succession.
BRIEF DESCRIPTION OF THE DRAWINGS
These and additional objects, advantages and features of the present invention will be more clearly understood in the light of the ensuing description of preferred embodiments thereof, given by way of example only with reference to the accompanying drawings, wherein:
Fig. 1 illustrates the general design of an apparatus featuring the characteristics of the present invention;
Fig. 2 is a block diagram of the apparatus sub-systems;
Fig. 3 is a flow-chart of the apparatus operations;
Fig. 4 is an example of the use of the method as applied to a specific word; and
Fig. 5 illustrates the morphing process.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
As schematically shown in Fig. 1, the proposed apparatus, generally designated A, is similar to any known electronic translator; it necessarily comprises keyboard B for entering words in the language of origin; suitable keys C for selecting indexed library databases, source language, etc.; other functional control buttons D, e.g., a target language selector; a first LCD screen E for displaying the entered as well as the translated words; and, as required for the implementation of the present invention, a second LCD screen G on which the movements of the speech organs are displayed, as will be described in greater detail hereinbelow, and speaker F.
Obviously, the design of the apparatus may vary in many respects, as in fact there are many types of electronic dictionaries available on the market.
It should be further mentioned that the audio function, represented by the speaker F, is not essential to the implementation of the invention, although its incorporation seems natural and preferable in the present context.
As seen in Fig. 2, the apparatus comprises the following main software applications: User's interface 10, data processor 12, ROM memory 14 and output interface 16.
In more detail, the user's interface comprises data input unit 20 for converting the data inputted by keyboard B into textual form, possibly including a spell-checker function; and a database file selector 22 for interpreting the commands entered by language selector buttons C and D (Fig. 1), to select source or target languages from the vocabulary database files 24 of the ROM memory application 14. The data processor 12 comprises data search engine 26, applicable for searching database files according to index or file ID-field names, or for matching the records of any two or more database files by index or common ID-fields, using data integrator 28.
The ROM memory application 14 comprises the above mentioned vocabulary database files 24. It contains a vocabulary of words in a given language available for selection by index or by alphabetical order; the index of each database is linked to other databases, to create mutual matching.
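The mutual matching via linked indexes described above can be sketched as follows (the table contents and function names are illustrative assumptions; the patent specifies no particular schema):

```python
# Hypothetical sketch: vocabulary database files linked by a shared index,
# so a record in one language maps to the same-index record in another.
VOCAB_FILES = {
    "fr": {101: "BONJOUR", 102: "MERCI"},
    "en": {101: "HELLO", 102: "THANK YOU"},
}

def match_by_index(word, source, target):
    """Locate `word` in the source file, then return the record sharing
    the same index in the target file (the 'data integrator' role)."""
    for idx, entry in VOCAB_FILES[source].items():
        if entry == word:
            return VOCAB_FILES[target].get(idx)
    return None
```

Because every database file shares the same index space, the same mechanism also links a word to its pronunciation and image records.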
Word pronunciation database files 30 contain records of the textual phonetic pronunciation, composed of basic syllables, for every word included in the vocabulary.
Images-of-mouth-articulation database files 32 contain records of the sequential image sets of all basic phonetic syllables.
Key-point sets of mouth images database files 34 contain records of all key-points sets of mouth images, outlining the first and the last image of each syllable.
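A minimal sketch of what a record in key-point database file 34 might hold, under the illustrative assumption of six key-points per frame as in Fig. 5 (the field names are hypothetical):

```python
from dataclasses import dataclass

# A key-point is an (x, y) coordinate on the mouth image outline.
Point = tuple[float, float]

@dataclass
class SyllableKeyPoints:
    """Hypothetical record layout for database file 34: each syllable
    stores the key-points outlining its first and last mouth image."""
    syllable: str
    first_image: list[Point]  # e.g. points A1..A6 of the opening frame
    last_image: list[Point]   # points A1..A6 of the closing frame

# Example record with placeholder coordinates:
he = SyllableKeyPoints("he",
                       first_image=[(0.0, 0.0)] * 6,
                       last_image=[(1.0, 1.0)] * 6)
```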
Image morphing generator 36 is also included for creating transient images, which are preferably stored in buffer 38 (see below).
The output interface application 16 is used for presenting the translation results to the user in a visual form and, optionally, also in audio form (speaker driver 40), and comprises text display driver 42 and mouth articulation display driver 44.
The operation of the apparatus according to the present invention is schematically represented in Fig. 3.
The first step of the user is to select, out of all the languages programmed into the apparatus A, the language determined as the source or origin language, and the desired target language. The database file selector 22 then selects the matching source vocabulary database file from database files 24.
Let us assume that the source language chosen is French, and the word "BONJOUR" is requested to be translated into English.
Data input unit 20 is applied for inputting the words by keyboard B. The selected word is displayed on screen E using text display driver 42, and is located by the data search engine 26 in the selected source-language vocabulary database file 24. The user selects the target language for the English translation. Let us further assume that the word "HELLO" has been retrieved as the translation of "BONJOUR". Database file selector 22 selects the English-language vocabulary database file from vocabulary database files 24, using the data integrator 28 for matching between the source and the target vocabulary database files. The translated word is now matched by data integrator 28 with the corresponding textual pronunciation keys stored in database files 30, containing the basic phonetic syllables of which every word is composed. Thus, the word "HELLO" is composed of two basic phonetic syllables, h and l, as represented in Fig. 4.
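The lookup chain just described — word, translation, pronunciation keys, articulation image sets — can be sketched as follows (the syllable split "he"/"lo", the file names, and all dictionary contents are illustrative assumptions; the patent leaves the phonetic coding of database file 30 unspecified):

```python
# Illustrative stand-ins for database files 24, 30 and 32.
TRANSLATIONS = {("fr", "en"): {"BONJOUR": "HELLO"}}       # file 24 + integrator
PRONUNCIATION_KEYS = {"HELLO": ["he", "lo"]}              # file 30 (assumed split)
ARTICULATION_IMAGES = {"he": ["he_1.png", "he_2.png"],    # file 32
                       "lo": ["lo_1.png", "lo_2.png"]}

def articulation_sequence(word, source="fr", target="en"):
    """Translate a word, then resolve its pronunciation keys into the
    per-syllable mouth articulation image sets."""
    translated = TRANSLATIONS[(source, target)][word]
    syllables = PRONUNCIATION_KEYS[translated]
    return translated, [ARTICULATION_IMAGES[s] for s in syllables]
```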
As further illustrated in the flow-chart of Fig. 3, each one of the basic syllables is matched with the associated set of mouth articulation images, as stored in the mouth articulation database files 32 (schematically presented as a series of ellipses).
Now, by displaying on screen G (Fig. 1) the relevant sets of mouth articulation images (employing mouth articulation display driver 44), a visual presentation of the word pronunciation is provided, concurrently with the audio production of the word. It is advisable that the duration of each image appearance be controllable, and also that each syllable could be repeated separately as many times as desired.
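The controllable image duration and per-syllable repetition suggested above could be realized along these lines (a sketch only; `show` is a hypothetical stand-in for the mouth articulation display driver):

```python
import time

def play_syllable(images, frame_duration=0.12, repeats=1, show=print):
    """Show each articulation image for a user-controllable duration,
    optionally repeating the whole syllable; `show` stands in for the
    display driver that puts the image on screen G."""
    for _ in range(repeats):
        for img in images:
            show(img)
            time.sleep(frame_duration)
```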
It will, however, be noted that the sequential presentation of mouth articulation image sets may not yield a streamlined pictorial presentation, but rather appear as a step-wise animation.
To rectify that, and according to an additional feature of the present invention, the apparatus is provided with an image morphing generator 36, whose function is to attain a full, smooth and realistic demonstration of the word pronunciation. By this image morphing function (which is known per se and need not be described in greater detail), transient or dynamically changing images are created. These transient images are dynamically inserted in-between the image sets of two successive syllables, to achieve a streamlined pictorial presentation.
The image morphing function makes it possible to create a sequential series of transient images, from the last image of the first syllable to the first image of the second syllable.
A simple example of the morphing process application is schematically depicted in Fig. 5. The new images are stored in buffer 38.
Fig. 5a schematically shows the last mouth image of the first syllable, outlined by selected key-points A1 - A6, whereas points A1''' - A6''' in Fig. 5d outline the first mouth image of the second (and last) syllable. The selected key-points are retrieved from the key-point sets of mouth images database file 34.
The product images of the morphing process are shown in Figs. 5b and 5c, each being slightly shifted at every step so as to reach the position of the target image in Fig. 5d.
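The key-point shift depicted in Figs. 5b and 5c amounts to interpolating each key-point between its two outlined positions; a minimal linear-interpolation sketch follows (real image morphing also warps the pixel data between the key-points, which is omitted here):

```python
def morph_keypoints(src, dst, steps):
    """Generate `steps` intermediate key-point sets between two frames.
    src, dst: equal-length lists of (x, y) points, e.g. A1..A6 and
    A1'''..A6'''. Each intermediate frame shifts every point a fraction
    of the way from its source toward its target position."""
    frames = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)  # fraction of the way from src toward dst
        frames.append([(sx + t * (dx - sx), sy + t * (dy - sy))
                       for (sx, sy), (dx, dy) in zip(src, dst)])
    return frames
```

With two intermediate steps this reproduces the Fig. 5 layout: the source frame (5a), two shifted frames (5b, 5c), and the target frame (5d).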
The application of the morphing process is integrated into the flow-chart of Fig. 3 (last two rows).
Using the mouth articulation display driver 44 to display on screen G the new transient images in-between the syllable images gives the user a vivid visual presentation of the word pronunciation.
It will be readily understood that the apparatus may be designed to comprise a ready-made database file of transient images matching all the possibilities of filling in-between any couple of basic syllables. This database file would replace the morphing function. However, this option would require an excessively large database file, and is therefore rejected for practical reasons. According to yet another option, the apparatus may comprise a database file of sequential image sets demonstrating the pronunciation of all the complete words belonging to the selected language. This again would dictate an unreasonably large database, and is therefore disadvantageous compared with the other possibilities described in detail hereinabove.
While the above description contains many specificities, these should not be construed as limitations on the scope of the invention, but rather as exemplification of the preferred embodiments. Those skilled in the art will envision other possible variations that are within its scope. Accordingly, the scope of the invention should be determined not by the embodiment illustrated, but by the appended claims and their legal equivalents.

Claims

WHAT IS CLAIMED IS:
1. A computerized dictionary apparatus, comprising a first database file (24) of words in given source and target languages, and a second database file (30) of textual pronunciation keys of the basic phonetic syllables relating to the words of the said first database file (24) of words, c h a r a c t e r i z e d by:
- a third database file (32) of sequential pictorial mouth articulation images for each of the said basic phonetic syllables;
- means (22) for the suitable selection of the source and target languages;
- means (20) for inputting a selected word included in the first database file (24);
- means (44) for displaying the selected word in a textual form;
- means (26) for locating the selected word in the first database file (24) of words in a given source language;
- means (32) for selecting the sequential pictorial mouth articulation images relating to the phonetic syllables of the textual pronunciation keys; and
- means (42) for displaying the sequential pictorial mouth articulation images of the selected word in differential and combined succession.
2. The apparatus as claimed in Claim 1, further c h a r a c t e r i z e d by:
- a fourth database (34) of selected key-point sets outlining the said images for each first and last sequential pictorial articulation of the phonetic syllables;
- means (36) for creating the transient pictorial articulation images between two phonetic syllables, using the key-point set of the last pictorial articulation of the first phonetic syllable and the key-point set of the first pictorial articulation of the second phonetic syllable, and so forth; and
- means for synchronically displaying the transient pictorial articulations in-between the respective pictorial articulations of the phonetic syllables.
3. The apparatus as claimed in Claim 1, further comprising means (40) for audio reproduction of said textual word pronunciation and said basic phonetic syllables, in synchronization with the visual display of said basic phonetic syllables involving morphing-image processing.
4. The apparatus as claimed in Claim 1, wherein the data inputting means (20) comprise a keyboard (B).
5. The apparatus as claimed in Claim 1, wherein the selected word inputted in the first source language becomes associated with the corresponding word in the given second target language by means of the data search engine (26).
6. The apparatus as claimed in Claim 1, further c h a r a c t e r i z e d by:
- a database of transient pictorial articulations necessary for the filling-in of the intervals between successive pictorial articulations of the basic phonetic syllables;
- means for locating the transient pictorial articulations;
- means for synchronically displaying the transient pictorial articulations in-between the respective pictorial mouth articulation images of the phonetic syllables; and
- means for combining the aforesaid pictorial mouth articulation images with the corresponding audio files, and for their controllable, simultaneous, differential and/or combined synchronical playback.
7. The apparatus as claimed in Claims 1-4, further c h a r a c t e r i z e d by means for automatic repetition of the differential and combined visual mouth articulation images, and audio differential and combined playback, of any inputted word from the first source-language database file and/or the second, third, etc., target-language database files.
PCT/IL2000/000060 1999-01-31 2000-01-30 Computerized translating apparatus WO2000045288A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP00901314A EP1149349A2 (en) 1999-01-31 2000-01-30 Computerized translating apparatus
JP2000596476A JP2002536720A (en) 1999-01-31 2000-01-30 Electronic translation device
AU21270/00A AU2127000A (en) 1999-01-31 2000-01-30 Computerized translating apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IL128295 1999-01-31
IL12829599A IL128295A (en) 1999-01-31 1999-01-31 Computerized translator displaying visual mouth articulation

Publications (2)

Publication Number Publication Date
WO2000045288A2 true WO2000045288A2 (en) 2000-08-03
WO2000045288A3 WO2000045288A3 (en) 2000-12-07

Family

ID=11072437

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2000/000060 WO2000045288A2 (en) 1999-01-31 2000-01-30 Computerized translating apparatus

Country Status (6)

Country Link
EP (1) EP1149349A2 (en)
JP (1) JP2002536720A (en)
CN (1) CN1339133A (en)
AU (1) AU2127000A (en)
IL (1) IL128295A (en)
WO (1) WO2000045288A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006108236A1 (en) * 2005-04-14 2006-10-19 Bryson Investments Pty Ltd Animation apparatus and method

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4591481B2 (en) * 2007-07-27 2010-12-01 カシオ計算機株式会社 Display control apparatus and display control processing program
US20140006004A1 (en) * 2012-07-02 2014-01-02 Microsoft Corporation Generating localized user interfaces

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4884972A (en) * 1986-11-26 1989-12-05 Bright Star Technology, Inc. Speech synchronized animation
US5697789A (en) * 1994-11-22 1997-12-16 Softrade International, Inc. Method and system for aiding foreign language instruction

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4884972A (en) * 1986-11-26 1989-12-05 Bright Star Technology, Inc. Speech synchronized animation
US5697789A (en) * 1994-11-22 1997-12-16 Softrade International, Inc. Method and system for aiding foreign language instruction

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006108236A1 (en) * 2005-04-14 2006-10-19 Bryson Investments Pty Ltd Animation apparatus and method

Also Published As

Publication number Publication date
IL128295A0 (en) 1999-11-30
WO2000045288A3 (en) 2000-12-07
AU2127000A (en) 2000-08-18
IL128295A (en) 2004-03-28
EP1149349A2 (en) 2001-10-31
JP2002536720A (en) 2002-10-29
CN1339133A (en) 2002-03-06

Similar Documents

Publication Publication Date Title
US7149690B2 (en) Method and apparatus for interactive language instruction
US6729882B2 (en) Phonetic instructional database computer device for teaching the sound patterns of English
US10088976B2 (en) Systems and methods for multiple voice document narration
US6324511B1 (en) Method of and apparatus for multi-modal information presentation to computer users with dyslexia, reading disabilities or visual impairment
US8498866B2 (en) Systems and methods for multiple language document narration
US6022222A (en) Icon language teaching system
JP4237915B2 (en) A method performed on a computer to allow a user to set the pronunciation of a string
US8346557B2 (en) Systems and methods document narration
US20060194181A1 (en) Method and apparatus for electronic books with enhanced educational features
JP4833313B2 (en) Chinese dialect judgment program
US20070255570A1 (en) Multi-platform visual pronunciation dictionary
CN112053595B (en) Computer-implemented training system
KR101102520B1 (en) The audio-visual learning system of its operating methods that based on hangul alphabet combining the metrics
KR100238451B1 (en) A computer aided education system and control techniques for korean
US20050137872A1 (en) System and method for voice synthesis using an annotation system
Fitria English Accent Variations of American English (Ame) and British English (Bre): An Implication in English Language Teaching
US20040102973A1 (en) Process, apparatus, and system for phonetic dictation and instruction
EP1149349A2 (en) Computerized translating apparatus
KR20030079497A (en) service method of language study
AU2012100262B4 (en) Speech visualisation tool
EP3959706A1 (en) Augmentative and alternative communication (acc) reading system
Sobkowiak Pronounciation In Macmillan English Dictionary For Advanced Learners On Cd-Rom
Meron et al. Improving the authoring of foreign language interactive lessons in the tactical language training system.
KR102645880B1 (en) Method and device for providing english self-directed learning contents
KR102112059B1 (en) Method for making hangul mark for chinese pronunciation on the basis of listening, and method for displaying the same, learning foreign language using the same

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 00803306.4

Country of ref document: CN

AK Designated states

Kind code of ref document: A2

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: A3

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

ENP Entry into the national phase

Ref document number: 2000 596476

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 09890504

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 2000901314

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2000901314

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWW Wipo information: withdrawn in national office

Ref document number: 2000901314

Country of ref document: EP