CA2479483A1 - Multi-layer focusing method and apparatus therefor - Google Patents
Multi-layer focusing method and apparatus therefor
- Publication number
- CA2479483A1 CA002479483A CA2479483A
- Authority
- CA
- Canada
- Prior art keywords
- focusing
- contents
- layer
- user
- focusing layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
- G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
- G11B27/32—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on separate auxiliary tracks of the same or an auxiliary record carrier
- G11B27/327—Table of contents
- G11B27/329—Table of contents on a disc [VTOC]
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0489—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using dedicated keyboard keys or combinations thereof
- G06F3/04892—Arrangements for controlling cursor position based on codes indicative of cursor displacements from one discrete location to another, e.g. using cursor control keys associated to different directions or using the tab key
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/102—Programmed access in sequence to addressed parts of tracks of operating record carriers
- G11B27/105—Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
- G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
- G11B27/30—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on the same track as the main recording
- G11B27/3027—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on the same track as the main recording used signal is digitally coded
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/42204—User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/443—OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
- H04N21/4438—Window management, e.g. event handling following interaction with the user interface
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/482—End-user interface for program selection
- H04N21/4825—End-user interface for program selection using a list of items to be played back in a given order, e.g. playlists
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/84—Television signal recording using optical recording
- H04N5/85—Television signal recording using optical recording on discs or drums
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B2220/00—Record carriers by type
- G11B2220/20—Disc-shaped record carriers
- G11B2220/21—Disc-shaped record carriers characterised in that the disc is of read-only, rewritable, or recordable type
- G11B2220/215—Recordable discs
- G11B2220/216—Rewritable discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B2220/00—Record carriers by type
- G11B2220/20—Disc-shaped record carriers
- G11B2220/25—Disc-shaped record carriers characterised in that the disc is based on a specific recording technology
- G11B2220/2537—Optical discs
- G11B2220/2562—DVDs [digital versatile discs]; Digital video discs; MMCDs; HDCDs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B2220/00—Record carriers by type
- G11B2220/20—Disc-shaped record carriers
- G11B2220/25—Disc-shaped record carriers characterised in that the disc is based on a specific recording technology
- G11B2220/2537—Optical discs
- G11B2220/2562—DVDs [digital versatile discs]; Digital video discs; MMCDs; HDCDs
- G11B2220/2575—DVD-RAMs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/482—End-user interface for program selection
Abstract
Provided are a method for efficiently navigating a plurality of contents which are reproduced in a device for reproducing interactive contents, for example, a computer, a DVD player, a PDA, or a cellular phone, by using a device having a limited number of input keys, and an apparatus therefor. The provided multi-layer focusing method includes (a) allotting focusing layer values to contents elements to form focusing layers, (b) providing the contents elements included in any one focusing layer to a user, and (c) moving the focusing when a command to move the focusing is input from the user.
Description
MULTI-LAYER FOCUSING METHOD AND APPARATUS THEREFOR
Technical Field
The present invention relates to a method and an apparatus for efficiently performing a navigation process by using a user input device having a limited number of input keys when reproducing interactive contents formed by a markup language in a reproducing device, for example, a computer, a DVD player, a PDA, or a cellular phone.
Background Art
Since specific elements in a markup language document include operations to be performed, which are formed as tags, the elements should be selected by a user in order to perform the corresponding operations. Here, the selected state is referred to as a focus-on state. In the focus-on state, the element receives commands from the user.
Examples of focusing methods, by which a specific element is selected so that the operations allotted to it can be performed, are as follows.
First, the elements are set to the focus-on state using a pointing device, such as a mouse or a joystick, based on location information. In other words, the user places a pointer on the element to be focused on and clicks a selection button on the screen.
Second, a predetermined selection order is allotted to each element, and the elements are successively focused on according to the values input from an input device, such as a keyboard. In order to determine the focusing order of the elements when navigating the document using the keyboard, a document producer may define a tabbing order. Accordingly, the selected element can be activated using a tab key. Here, the tabbing order of the elements can be determined by assigning numbers between 0 and 32767 to the tab index in the attribute definition of the markup language. The markup elements which support the tab index attribute include "A", "AREA", "BUTTON", "INPUT", "OBJECT", "SELECT", and "TEXTAREA".
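Purely as an illustration (the `TabFocus` class and the element names below are hypothetical, not part of the patent text), the tab-order scheme described above can be sketched as cycling focus through elements sorted by their tab-index values:

```python
# Hypothetical sketch of tab-order focusing: each element carries a tab
# index (0..32767), and the tab key advances focus in ascending order.

class TabFocus:
    def __init__(self, elements):
        # elements: list of (name, tab_index) pairs
        self.order = [name for name, idx in sorted(elements, key=lambda e: e[1])]
        self.pos = 0

    def focused(self):
        return self.order[self.pos]

    def tab(self):
        # Advance to the next element, wrapping around at the end.
        self.pos = (self.pos + 1) % len(self.order)
        return self.focused()

nav = TabFocus([("BUTTON", 2), ("A", 1), ("TEXTAREA", 3)])
print(nav.focused())  # "A" has the lowest tab index, so it is focused first
print(nav.tab())      # then "BUTTON"
```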
Third, access key values are allotted to each element to directly activate or focus the element. Here, the key value is received from the user input device and the corresponding element is directly accessed and focused on.
According to the access key attribute scheme in the attribute definition of the markup language, access key values are allotted to the elements. Here, each access key value is denoted by a character in a character set, so a document producer should consider the keys of the user input device when allotting access key values to the elements. The markup elements which support the access key attribute include "A", "AREA", "BUTTON", "INPUT", "LEGEND", and "TEXTAREA".
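Again as an illustration only (the key-to-element mapping below is invented for the sketch), access-key focusing amounts to a direct lookup from an input-device key to the element it focuses:

```python
# Hypothetical sketch of access-key focusing: each key of the input
# device maps directly to one element, which is focused immediately.

ACCESS_KEYS = {"p": "play_button", "s": "stop_button", "m": "menu_link"}

def focus_by_access_key(key):
    # Unknown keys leave the focus unchanged (modeled here as None).
    return ACCESS_KEYS.get(key)

print(focus_by_access_key("p"))  # play_button
```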
However, the focusing method of the conventional interactive contents formed by the markup language has the following problems.
First, the elements included in an embedded object cannot be controlled.
Second, the keys or buttons of the user input unit have only one function each.
In order to solve the first problem, a method of focusing on an element using a pointing device, such as a mouse, as well as the keyboard, and clicking a mouse button, is used. In other words, even when video or audio which is embedded into Windows Media Player or Real Player is reproduced, desired operations can be performed by focusing on and clicking a play icon, a stop icon, or a pause icon using the mouse. Here, Windows Media Player or Real Player is a different medium from a viewer, which controls the markup using the "OBJECT" element.
In order to solve the second problem, the range to which the keys or the buttons having the predetermined functions are applied is provided to the user using a multi-window. In other words, a media player window is activated on the markup language document in order to prevent the user from being confused even when the keys or the buttons of the input device have various functions.
FIGS. 1A through 1C are examples of a process of navigating a menu in a conventional DVD-video. If a MENU button 104 of the user input device is pressed when a conventional DVD-video is activated, a menu selection screen 101, which is defined in a disc, is displayed while illustrating highlight information 102 on a selected item. Thus, the user may use a navigation direction key 103 shown in FIG. 1B to select another item and show different highlight information 105 as shown in FIG. 1C.
FIGS. 2A and 2B are diagrams for explaining the conventional focusing method. As shown in FIG. 2A, in the interactive contents of the markup language document including a DVD reproduction screen, the DVD reproduction screen is embedded as the "OBJECT" element, and links 1 through 6 may perform specific operations when they are focused on by using the user input device.
When it is assumed that moving the focusing among the links is performed using the navigation keys, the predetermined operation, such as reproduction, is performed by focusing on the DVD reproduction image and pressing an ENTER key. Here, if reproduction is performed simultaneously with the loading of the markup language document, the DVD-video requires only the focusing operation.
FIG. 2B illustrates an example of moving the focusing to another link within the markup language document by pressing the navigation keys for selecting the menu in the DVD reproduction screen. In other words, as shown in FIG. 1B, when navigating the menu on the DVD
reproduction screen of the DVD-video, the same key may have to be used to perform the DVD navigation operation. In this case, since the navigation keys are used to move the focusing among the links, they cannot be used to select the items of the menu in the DVD reproduction screen.
This problem is more serious in the case where the interactive contents are controlled using an input device having a limited number of keys, such as a remote controller.
Disclosure of the Invention
To solve the above-described problems, it is an objective of the present invention to provide a multi-layer focusing method and an apparatus therefor.
To meet the above objective, according to one aspect of the present invention, there is provided a multi-layer focusing method for focusing contents provided in a multi-layer structure, comprising (a) allotting predetermined focusing layer values to contents elements linked to contents to be provided to a user, to form focusing layers; (b) providing the list information on the contents elements included in a predetermined focusing layer and focusing on any one contents element of the contents elements provided to the user; (c) receiving a predetermined command from the user; and (d) moving the focusing when the command is to move the focusing. Here, act (d) comprises (d1) determining whether an upper focusing layer of a present focusing layer exists in the case where the command is to change the present focusing layer to the upper focusing layer, and (d2) when the upper focusing layer exists, providing the list information on the contents elements included in the upper focusing layer to the user and focusing on any one of the contents elements included in the upper focusing layer.
Alternatively, act (d) comprises (d1) determining whether a lower focusing layer is linked to the focused-on contents element in the case where the command is to change the focusing layer to the lower focusing layer, and (d2) when the lower focusing layer is linked to the focused-on contents element, providing the list information on the contents elements included in the lower focusing layer to the user and focusing on any one contents element included in the lower focusing layer. Alternatively, act (d) comprises moving the focusing to a next contents element based on a predetermined order in the case where the command is to move the focusing within the same focusing layer.
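Acts (a) through (d) above could be sketched, under the assumption of a simple in-memory layer tree (the class names and the "DVD video" / "link 1" elements below are hypothetical illustrations, not definitions from the claims), roughly as follows:

```python
# Hypothetical sketch of multi-layer focusing: each focusing layer holds
# an ordered list of elements, and an element may link a lower layer.

class Element:
    def __init__(self, name, lower=None):
        self.name = name
        self.lower = lower  # linked lower focusing layer, if any

class Layer:
    def __init__(self, elements):
        self.elements = elements
        self.upper = None
        for e in elements:
            if e.lower:
                e.lower.upper = self  # remember the upper layer for (d1)

class Focus:
    def __init__(self, top):
        self.layer, self.pos = top, 0  # (b) focus one element of the layer

    def focused(self):
        return self.layer.elements[self.pos]

    def move_next(self):
        # (d) move within the same layer in a predetermined order
        self.pos = (self.pos + 1) % len(self.layer.elements)

    def move_down(self):
        # (d1)/(d2): descend only if a lower layer is linked to the focus
        if self.focused().lower:
            self.layer, self.pos = self.focused().lower, 0

    def move_up(self):
        # (d1)/(d2): ascend only if an upper layer exists
        if self.layer.upper:
            self.layer, self.pos = self.layer.upper, 0

menu = Layer([Element("play"), Element("stop")])
top = Layer([Element("link 1"), Element("DVD video", lower=menu)])
f = Focus(top)
f.move_next()   # focus the embedded "DVD video" element
f.move_down()   # descend into its lower focusing layer
print(f.focused().name)  # play
```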
According to another aspect of the present invention, there is provided an apparatus for managing multi-layer focusing comprising a contents providing unit for providing contents which are linked to contents elements and are to be provided to a user; a focusing layer information management unit for allotting predetermined focusing layer values to the contents elements which are linked to the contents to be provided to the user, to form focusing layers; an input unit for receiving commands from the user to move the focusing; an output unit for providing the contents to the user, providing predetermined contents elements, and representing focused-on contents elements linked to the contents; and a focusing management unit for providing, to the output unit, the list information on the contents elements included in a specific focusing layer and the focusing information for focusing on any one of the contents elements provided to the user, and for moving the focusing in the case where the command is to move the focusing.
Brief Description of the Drawings
FIGS. 1A through 1C are examples of a navigation process of a menu in a conventional DVD-video;
FIGS. 2A and 2B are diagrams for explaining a conventional focusing method;
FIG. 3 illustrates a structure of an apparatus for multi-layer focusing according to the present invention;
FIG. 4 illustrates a multi-layered focusing structure according to the present invention;
FIGS. 5A and 5B are examples of moving focusing within a top-focusing layer according to the present invention;
FIGS. 6A and 6B are examples of moving focusing from a top-focusing layer to a lower focusing layer according to the present invention;
FIGS. 7A and 7B are examples of navigating each element in a DVD embedded mode where interactive contents are formed by a markup language;
FIG. 8 is a flowchart explaining a method for processing the operation of navigation keys in the case where an embedded DVD-video is focused on;
FIGS. 9A through 9C are examples of processing the operation of navigation keys in the case where an embedded DVD-video is focused on;
FIGS. 10A and 10B are examples of changing focusing from an embedded DVD-video to a markup language document;
FIG. 11 is a flowchart explaining a multi-layer focusing method according to the present invention; and
FIG. 12 is a block diagram illustrating the structure of an apparatus for managing multi-layer focusing according to the present invention.
Best Mode for Carrying Out the Invention
The present invention will now be described in more detail with reference to the accompanying drawings.
FIG. 3 is a diagram illustrating the structure of a multi-layer focusing apparatus according to the present invention. Here, a user input device 301, for example, a remote controller, receives commands related to focusing from a user. Reference numeral 302 denotes a display device, which displays the contents selected by the user and the list information on the contents to be selected by the user. Examples of the display device 302 include a television and a computer monitor.
Reference numeral 303 denotes a recording medium for storing the contents. Here, the recording medium 303 stores the contents which will be provided to the user, and provides the contents to the user through the display device 302 based on the commands from the user.
Reference numeral 304 denotes a communication network, through which the contents can be provided to the user.
Reference numeral 305 denotes a focusing management unit, which manages focusing information indicating which contents in the contents list information are focused on, and manages the moving of the focusing based on the commands from the user.
FIG. 4 illustrates a multi-layered focusing structure according to the present invention, for example, a markup language document. Here, reference numeral 401 denotes the markup language document and reference numeral 402 denotes a top-focusing layer. Reference numeral 403 denotes an element linked to a lower focusing layer, among the elements in the top-focusing layer 402. If the user inputs a command of selecting the element 403 and changing the focusing layer to the lower focusing layer, the focusing moves to the lower focusing layer, which is linked to the element 403. Reference numeral 404 denotes a first lower focusing layer. Reference numeral 405 denotes an element linked to another lower focusing layer, among the elements in the first lower focusing layer 404. Reference numeral 406 denotes a second lower focusing layer to which the focusing is moved through the element 405 of the first lower focusing layer 404.
In the case of the markup language document, the elements of the lower focusing layer do not have to be markup language elements. In other words, the elements of the lower focusing layer can be a control menu for controlling the media embedded in the markup language document or the contents input to an input form by the user.
FIGS. 5A and 5B illustrate a navigation method in the top-focusing layer of a markup language document including more than one lower focusing layer by using the navigation keys of the user input device according to the present invention. Here, examples of the navigation method include moving the focusing according to a tabbing order of the markup language and moving the focusing to the desired element using the access key attribute of the markup language. Here, the focusing is represented by highlighting in the present invention.
Rectangles in an element 5 denote that the element 5 includes lower focusing layers to be navigated. Here, an example of the embedded element 5 is an "OBJECT" element of the markup language.
An element 1 501 is focused on in FIG. 5A, and the element 5 502 is focused on in FIG. 5B.
FIGS. 6A and 6B are examples of moving the focusing from the top-focusing layer to the lower focusing layer according to the present invention. In the case where a specific element, for example, the element 5, in the markup language document includes other elements to be navigated, the other elements are referred to as the lower focusing layer, which is different from the focusing layer including the presently focused element. When a key or a button for changing the focusing layers is input, the layer having the focused element is changed. When the navigation keys, for example, direction keys, are input, the focused elements are changed within a specific layer.
Meanwhile, the focused elements are represented using different colors according to the focusing layer in order to prevent the user from being confused when performing the navigation using the focusing layers.
FIG. 6A is an example of moving the focusing from the top-focusing layer to the lower focusing layer and focusing on an element 601 included in the lower focusing layer. FIG. 6B is an example of moving the focusing from the lower focusing layer to the top-focusing layer and focusing on the element 5 602.
FIGS. 7A and 7B are examples of navigating each element in a DVD embedded mode where the interactive contents are formed by the markup language.
When the multi-layer focusing method is applied to the DVD, the user may navigate the elements included in the top-focusing layer using the navigation keys of the user input device on the DVD-video embedded screen, i.e., a screen in the markup language document.
In addition, since the DVD-video embedded in the markup language document is the "OBJECT" element of the markup language elements, the embedded DVD-video can be focused on using the navigation keys of the user input device. In other words, the embedded DVD-video is focused on as the "OBJECT" element of the markup language elements.
Reference numeral 701 denotes a first focused element of the markup language document, and reference numeral 702 denotes a second focused element due to the input of a navigation key, such as a tab key or a direction key, by the user.
FIG. ~ is a flov~ichart explaining a method for processing the operation of navigation keys in the case where the embedded DVD-Video is focused on. When the embedded DVD-video as the so "OBJECT" element is focused on by manipulating the navigation keys, the elements which can be navigated using the navigation keys are changed according to the input of a focusing layer change key, for example, an ENTER key, of the user input device, because the DVD-video includes a lower focusing layer to be navigated.
s Referring to FIG. 8, it is determined whether the key value input from the user is a focusing change key value signaling change to the lower focusing layer in step 801. If the focusing change key value is input, focusing is navigated among the DVD-video elements and the focusing cannot move to the elements included in the markup language io document in step 803. If the focusing change key value is not input, focusing navigation is performed on the elements included in the markup language document in step 802.
FIGS. 9A through 9C are examples of processing the operation of the navigation keys in the case where the embedded DVD-Video is is focused on.
When the DVD-video as the "OBJECT" element is embedded in the markup language document as the top-focusing layer, the layer where the navigation keys of the user input device operate is changed from the markup language document to the DVD-video by inputting the 2o focusing layer change key value signaling change to the lower focusing layer, for example, the ENTER key. Here, since the DVD-video includes the lower focusing layer to be navigated, the layer can be changed from the markup language document to the DVD-video.
FIG. 9A illustrates the case where the embedded DVD-video is 2s focused on, which is represented as being highlighted. FIG. 9B
illustrates the case where the focusing layer change key, for example, the ENTER key, is not input and the focusing moves from the DVD-video to another element included in the same focusing layer as that of the DVD-video. Here, "link 1" element is focused on in FIG. 9B. FIG. 9C
3o illustrates the case where the focusing layer change key signaling change to the lower focusing layer, for example, the ENTER key, is input while the DVD-video is focused on as shown in FIG. 9A. In this case, the focusing cannot move to the other elements in the same focusing layer as that of the embedded DVD-video.
s FIGS. 10A and 10B are examples of changing the focusing from the embedded DVD-video to the markup language document.
When the user inputs the focusing layer change key signaling change to the lower focusing layer, for example, the ENTER key, using the user input device, the focusing moves from the markup language Io document as the top-focusing layer to the DVD-video as the lower focusing layer. Accordingly, the navigation keys of the user input device operate as in the case of the focusing method of the conventional interactive contents. In order to move the focusing from the DVD-video as the lower focusing layer to the markup language document as the is top-focusing layer, the user inputs a focusing layer change key signaling change to the upper focusing layer, for example, an ESC key, of the user input device. Accordingly, the focusing moves from the DVD-video as the lower focusing layer to the markup language document as the top-focusing layer. Therefore, the embedded "OBJECT" element is 2o focused on so that the embedded "OBJECT" element is highlighted.
FIG. 10A illustrates the case where "Gallery 1" element in the lower focusing layer is focused on. FIG. 10B illustrates the case where "MENU" element in the top-focusing layer is focused on by moving the focusing to the upper focusing layer.
2s FIG. 11 is a flowchart explaining the multi-layer focusing method according to the present 'invention. First, the contents elements included in a specific focusing layer are displayed and one of the displayed elements is focused on in step 1101. Thereafter, a predetermined command, i.e., a predetermined key value, is received 3o from the user and the received command is analyzed in step 1102. If the command is to change the focusing layer to the upper focusing layer, it is checked whether the upper focusing layer exists in step 1103.
When the upper focusing layer exists, the focusing is moved to the upper focusing layer in step 1104. If the command is to change the s focusing layer to the lower focusing layer, it is determined whether the presently focused element is linked to an element of the lower focusing layer in step 1105 in order to move the focusing to the lower focusing layer in step 1106. If the command is to move the focusing within the same focusing layer, the focusing moves to the contents element to according to a predetermined order, for example, the tabbing order, in step 1107. If the command is to play the contents, the corresponding contents are provided to the user in step 1103.
FIG. 12 is a block diagram illustrating the structure of the apparatus for managing multi-layer focusing according to the present is invention. The apparatus for managing multi-layer focusing according to the present invention includes an input unit 1201, a contents providing unit 1202, a focusing layer information management unit 1203, a focusing management unit 1204, and an output unit 1205.
The input unit 1201 receives the command to move the focusing 2o within the same focusing layer or to change the focusing layer, from the user.
The contents providing unit 1202 stores the contents which will be provided to the user. Here, when the contents are provided to the user over a communication network, such as the Internet, the contents 2s providing unit 1202 includes the communication network.
The focusing layer information management unit 1203 manage s the contents, which will be provided to the user, and the focusing layer information, which is allotted to each of the elements. Here, the focusing layer information of the elements, which can be includ so ed in the attribution information on the elements, may include the fo cusing layer information on the corresponding element, the focusing layer information on the upper focusing layer, and the focusing lay er information on the element linked to the corresponding element.
The focusing management unit 1204 represents the elements s linked to the contents which will be provided to the user, receives t he command about focusing from the user via the input unit 201, a nd receives the information on the focusing layer of the elements fr om the focusing layer information management unit 1203 in order to move the focusing. In addition, when the command to play specifi io c contents is input by the user, the focusing management unit 1204 receives the contents from the contents providing unit 1202 and pr ovides the contents to the user through the output unit 1205.
The focusing layer information, which is allotted to the contents and the elements linked to the contents, can be formed in the structure is shown in Table 1.
[Table 1 ]
focusing focusing linked layer existence of layer element in linked information lower element information the upper contents on the focusing upper on the focusing focusing layer element layer layer link DVD-video 1 none none yes Gallerymovie list2 1 link 1 yes music Gallery 2 1 link 1 yes video list The present invention can be realized as a program which can be run on a computer, and can be performed by the computer using a recording medium which can be read from the computer.
Here, the recording medium may be any kind of recording device in which data are recorded, for example, a magnetic recording medium s such as ROM, a floppy disk, or a hard disk, an optical medium such as CD-ROM or a DVD, and carrier waves such as transmission over the Internet.
The drawing and specification of the invention are provided for illustration only and are not used to limit the scope of the invention set to forth in appended claims.
Industrial Applicability As described above, according to the present invention, any kind of media having interactive contents formed by using markup language is can be navigated using a device having a limited number of input keys, such as a television remote controller.
This problem is more serious in the case where the interactive contents are controlled using an input device having a limited number of keys, such as a remote controller.
Disclosure of the Invention
To solve the above-described problems, it is an objective of the present invention to provide a multi-layer focusing method and an apparatus therefor.
To meet the above objective, according to one aspect of the present invention, there is provided a multi-layer focusing method for focusing contents provided in a multi-layer structure comprising (a) allotting predetermined focusing layer values to contents elements linked to contents to be provided to a user, to form focusing layers; (b) providing the list information on the contents elements included in a predetermined focusing layer and focusing on any one contents element of the contents elements provided to the user; (c) receiving a predetermined command from the user; and (d) moving the focusing when the command is to move the focusing. Here, act (d) comprises (d1) determining whether an upper focusing layer of a present focusing layer exists in the case where the command is to change the present focusing layer to the upper focusing layer, and (d2) when the upper focusing layer exists, providing the list information on contents elements included in the upper focusing layer to the user and focusing on any one of the contents elements included in the upper focusing layer.
Alternatively, act (d) comprises (d1) determining whether a lower focusing layer is linked to the focused-on contents element in the case where the command is to change the focusing layer to the lower focusing layer, and (d2) when the lower focusing layer is linked to the focused-on contents element, providing the list information on contents elements included in the lower focusing layer to the user and focusing on any one contents element included in the lower focusing layer. Alternatively, act (d) comprises moving the focusing to a next contents element based on a predetermined order in the case where the command is to move the focusing within the same focusing layer.
According to another aspect of the present invention, there is provided an apparatus for managing multi-layer focusing comprising a contents providing unit for providing contents which are linked to contents elements and are to be provided to a user; a focusing layer information management unit for allotting predetermined focusing layer values to the contents elements which are linked to the contents to be provided to the user, to form focusing layers; an input unit for receiving commands from the user to move the focusing; an output unit for providing the contents to the user, providing predetermined contents elements, and representing focused-on contents elements linked to the contents; and a focusing management unit for providing the list information on the contents elements included in a specific focusing layer and the focusing information for focusing any one of the contents elements provided to the user to the output unit and moving the focusing in the case where the command is to move the focusing.
Brief Description of the Drawings
FIGS. 1A through 1C are examples of a navigation process of a menu in a conventional DVD-video;
FIGS. 2A and 2B are diagrams for explaining a conventional focusing method;
FIG. 3 illustrates a structure of an apparatus for multi-layer focusing according to the present invention;
FIG. 4 illustrates a multi-layered focusing structure according to the present invention;
FIGS. 5A and 5B are examples of moving focusing within a top-focusing layer according to the present invention;
FIGS. 6A and 6B are examples of moving focusing from a top-focusing layer to a lower focusing layer according to the present invention;
FIGS. 7A and 7B are examples of navigating each element in a DVD embedded mode where interactive contents are formed by a markup language;
FIG. 8 is a flowchart explaining a method for processing the operation of navigation keys in the case where an embedded DVD-video is focused on;
FIGS. 9A through 9C are examples of processing the operation of navigation keys in the case where an embedded DVD-video is focused on;
FIGS. 10A and 10B are examples of changing focusing from an embedded DVD-video to a markup language document;
FIG. 11 is a flowchart explaining a multi-layer focusing method according to the present invention; and
FIG. 12 is a block diagram illustrating the structure of an apparatus for managing multi-layer focusing according to the present invention.
Best mode for carrying out the Invention The present invention will now be described in more detail with reference to the accompanying drawings.
FIG. 3 is a diagram illustrating the structure of a multi-layer focusing apparatus according to the present invention. Here, a user input device 301, for example, a remote controller, receives commands related to focusing from a user. Reference numeral 302 denotes a display device, which displays the contents selected by the user and the list information on the contents to be selected by the user. Examples of the display device 302 include a television and a computer monitor.
Reference numeral 303 denotes a recording medium for storing the contents. Here, the recording medium 303 stores the contents, which will be provided to the user, and provides the contents to the user through the display device 302 based on the commands from the user.
Reference numeral 304 denotes a communication network, through which the contents can be provided to the user.
Reference numeral 305 denotes a focusing management unit, which manages focusing information indicating which contents in the contents list information are focused and manages the moving of the focusing based on the commands from the user.
FIG. 4 illustrates a multi-layered focusing structure according to the present invention, for example, a markup language document. Here, reference numeral 401 denotes the markup language document and reference numeral 402 denotes a top-focusing layer. Reference numeral 403 denotes an element linked to a lower focusing layer, among the elements in the top-focusing layer 402. If the user inputs a command of selecting the element 403 and changing the focusing layer to the lower focusing layer, the focusing moves to the lower focusing layer, which is linked to the element 403. Reference numeral 404 denotes a first lower focusing layer. Reference numeral 405 denotes an element linked to another lower focusing layer, among the elements in the first lower focusing layer 404. Reference numeral 406 denotes a second lower focusing layer to which the focusing is moved through the element 405 of the first lower focusing layer 404.
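The layered structure of FIG. 4 can be modelled as a small tree of layers. The sketch below is illustrative only; the class name `FocusLayer` and its fields are assumptions, not terms used by the invention.

```python
class FocusLayer:
    """One focusing layer: its elements, its upper layer, and any linked lower layers."""

    def __init__(self, name, elements):
        self.name = name
        self.elements = elements   # elements belonging to this layer
        self.parent = None         # upper focusing layer (None for the top layer)
        self.links = {}            # element -> linked lower FocusLayer

    def link_lower_layer(self, element, lower):
        """Link a lower focusing layer to one of this layer's elements."""
        lower.parent = self
        self.links[element] = lower

# Mirroring FIG. 4: the top layer 402 contains the element 403, which opens the
# first lower layer 404; its element 405 in turn opens the second lower layer 406.
top = FocusLayer("402", ["element 1", "element 403"])
first_lower = FocusLayer("404", ["element 405"])
second_lower = FocusLayer("406", ["element X"])
top.link_lower_layer("element 403", first_lower)
first_lower.link_lower_layer("element 405", second_lower)
```

Changing the focusing layer then amounts to following a `links` entry downward or the `parent` reference upward.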
In the case of the markup language document, the elements of the lower focusing layer do not have to be the markup language elements. In other words, the elements of the lower focusing layer can be a control menu for controlling the media embedded in the markup language document or the contents input to an input form by the user.
FIGS. 5A and 5B illustrate a navigation method in the top-focusing layer of a markup language document including more than one lower focusing layer by using the navigation keys of the user input device according to the present invention. Here, examples of the navigation method include moving the focusing according to a tabbing order of the markup language and moving the focusing to the desired element using the access key attribute of the markup language. Here, the focusing is represented by highlighting in the present invention.
Rectangles in an element 5 denote that the element 5 includes lower focusing layers to be navigated. Here, an example of the embedded element 5 includes an "OBJECT" element of the markup language.
An element 1 501 is focused in FIG. 5A, and the element 5 502 is focused in FIG. 5B.
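The tabbing-order navigation of FIGS. 5A and 5B can be sketched in a few lines; the function name `next_in_tab_order` is an assumption made for illustration.

```python
def next_in_tab_order(elements, focused):
    """Move the focusing to the next element in the tabbing order, wrapping around."""
    i = elements.index(focused)
    return elements[(i + 1) % len(elements)]

# The top-focusing layer of FIGS. 5A and 5B, with element 1 focused first.
top_layer = ["element 1", "element 2", "element 3", "element 4", "element 5"]
```

Repeatedly applying `next_in_tab_order` moves the highlight from element 1 (501) toward element 5 (502); from the last element the focusing wraps back to the first.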
FIGS. 6A and 6B are examples of moving the focusing from the top-focusing layer to the lower focusing layer according to the present invention. In the case where a specific element, for example, the element 5, in the markup language document includes other elements to be navigated, the other elements are referred to as the lower focusing layer, which is different from the focusing layer including the presently focused element. When a key or a button for changing the focusing layers is input, the layer having the focused element is changed. When the navigation keys, for example, direction keys, are input, the focused elements are changed within a specific layer.
Meanwhile, the focused elements are represented using different colors according to the focusing layer in order to prevent the user from being confused when performing the navigation using the focusing layers.
FIG. 6A is an example of moving the focusing from the top-focusing layer to the lower focusing layer and focusing an element 601 included in the lower focusing layer. FIG. 6B is an example of moving the focusing from the lower focusing layer to the top-focusing layer and focusing the element 5 602.
FIGS. 7A and 7B are examples of navigating each element in a DVD embedded mode where the interactive contents are formed by the markup language.
When the multi-layer focusing method is applied to the DVD, the user may navigate the elements included in the top-focusing layer using the navigation keys of the user input device, on the DVD-video embedded screen, i.e., a screen in the markup language document.
In addition, since the DVD-video embedded in the markup language document is the "OBJECT" element of the markup language elements, the embedded DVD-video can be focused using the navigation keys of the user input device. In other words, the embedded DVD-video is focused on as the "OBJECT" element of the markup language elements.
Reference numeral 701 denotes a first focused element of the markup language document, and reference numeral 702 denotes a second focused element due to the input of the navigation key, such as a tab key or a direction key, by the user.
FIG. 8 is a flowchart explaining a method for processing the operation of navigation keys in the case where the embedded DVD-video is focused on. When the embedded DVD-video as the "OBJECT" element is focused on by manipulating the navigation keys, the elements which can be navigated using the navigation keys are changed according to the input of a focusing layer change key, for example, an ENTER key, of the user input device, because the DVD-video includes a lower focusing layer to be navigated.
Referring to FIG. 8, it is determined whether the key value input from the user is a focusing change key value signaling change to the lower focusing layer in step 801. If the focusing change key value is input, focusing is navigated among the DVD-video elements and the focusing cannot move to the elements included in the markup language document in step 803. If the focusing change key value is not input, focusing navigation is performed on the elements included in the markup language document in step 802.
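The decision of steps 801 through 803 reduces to a single key test; in the sketch below the key value and the two handler names are assumptions, not the patent's API.

```python
LOWER_LAYER_KEY = "ENTER"  # assumed focusing layer change key value

def route_navigation(key, navigate_dvd_video, navigate_document):
    """Step 801: test the input key value, then navigate either the embedded
    DVD-video elements (step 803) or the markup language document (step 802)."""
    if key == LOWER_LAYER_KEY:
        return navigate_dvd_video()   # focusing stays among the DVD-video elements
    return navigate_document()        # focusing moves among the document elements
```

Any key other than the assumed layer change key keeps the focusing among the document's own elements.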
FIGS. 9A through 9C are examples of processing the operation of the navigation keys in the case where the embedded DVD-video is focused on.
When the DVD-video as the "OBJECT" element is embedded in the markup language document as the top-focusing layer, the layer where the navigation keys of the user input device operate is changed from the markup language document to the DVD-video by inputting the focusing layer change key value signaling change to the lower focusing layer, for example, the ENTER key. Here, since the DVD-video includes the lower focusing layer to be navigated, the layer can be changed from the markup language document to the DVD-video.
FIG. 9A illustrates the case where the embedded DVD-video is focused on, which is represented as being highlighted. FIG. 9B
illustrates the case where the focusing layer change key, for example, the ENTER key, is not input and the focusing moves from the DVD-video to another element included in the same focusing layer as that of the DVD-video. Here, "link 1" element is focused on in FIG. 9B. FIG. 9C
illustrates the case where the focusing layer change key signaling change to the lower focusing layer, for example, the ENTER key, is input while the DVD-video is focused on as shown in FIG. 9A. In this case, the focusing cannot move to the other elements in the same focusing layer as that of the embedded DVD-video.
FIGS. 10A and 10B are examples of changing the focusing from the embedded DVD-video to the markup language document.
When the user inputs the focusing layer change key signaling change to the lower focusing layer, for example, the ENTER key, using the user input device, the focusing moves from the markup language document as the top-focusing layer to the DVD-video as the lower focusing layer. Accordingly, the navigation keys of the user input device operate as in the case of the focusing method of the conventional interactive contents. In order to move the focusing from the DVD-video as the lower focusing layer to the markup language document as the top-focusing layer, the user inputs a focusing layer change key signaling change to the upper focusing layer, for example, an ESC key, of the user input device. Accordingly, the focusing moves from the DVD-video as the lower focusing layer to the markup language document as the top-focusing layer. Therefore, the embedded "OBJECT" element is focused on so that the embedded "OBJECT" element is highlighted.
FIG. 10A illustrates the case where "Gallery 1" element in the lower focusing layer is focused on. FIG. 10B illustrates the case where "MENU" element in the top-focusing layer is focused on by moving the focusing to the upper focusing layer.
FIG. 11 is a flowchart explaining the multi-layer focusing method according to the present invention. First, the contents elements included in a specific focusing layer are displayed and one of the displayed elements is focused on in step 1101. Thereafter, a predetermined command, i.e., a predetermined key value, is received from the user and the received command is analyzed in step 1102. If the command is to change the focusing layer to the upper focusing layer, it is checked whether the upper focusing layer exists in step 1103.
When the upper focusing layer exists, the focusing is moved to the upper focusing layer in step 1104. If the command is to change the focusing layer to the lower focusing layer, it is determined whether the presently focused element is linked to an element of the lower focusing layer in step 1105 in order to move the focusing to the lower focusing layer in step 1106. If the command is to move the focusing within the same focusing layer, the focusing moves to the next contents element according to a predetermined order, for example, the tabbing order, in step 1107. If the command is to play the contents, the corresponding contents are provided to the user in step 1108.
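The dispatch of FIG. 11 might be sketched as below, assuming a simple dictionary representation of the layers; the command names, layer names, and the treatment of the play step as returning the focused element are all assumptions made for illustration.

```python
# Two layers mirroring the running example: a top layer whose "DVD-video"
# element links to a lower layer of gallery elements.
layers = {
    "top": {"elements": ["MENU", "DVD-video"], "parent": None,
            "links": {"DVD-video": "dvd"}},
    "dvd": {"elements": ["Gallery 1", "Gallery 2"], "parent": "top", "links": {}},
}

def handle_command(command, state):
    """Handle one user command per the branches of FIG. 11."""
    layer = layers[state["layer"]]
    if command == "up" and layer["parent"] is not None:       # move to the upper layer
        state["layer"] = layer["parent"]
        state["focused"] = layers[layer["parent"]]["elements"][0]
    elif command == "down":                                   # move to a linked lower layer
        lower = layer["links"].get(state["focused"])
        if lower is not None:
            state["layer"] = lower
            state["focused"] = layers[lower]["elements"][0]
    elif command == "move":                                   # next element in tabbing order
        i = layer["elements"].index(state["focused"])
        state["focused"] = layer["elements"][(i + 1) % len(layer["elements"])]
    elif command == "play":
        return state["focused"]   # stand-in for providing the focused contents
    return None

state = {"layer": "top", "focused": "DVD-video"}
handle_command("down", state)   # layer change key: focusing enters the DVD-video layer
handle_command("move", state)   # navigation key: next element in the tabbing order
```

After the two calls above, the focusing sits on "Gallery 2" inside the lower layer, and an "up" command would return it to the top-focusing layer.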
FIG. 12 is a block diagram illustrating the structure of the apparatus for managing multi-layer focusing according to the present invention. The apparatus for managing multi-layer focusing according to the present invention includes an input unit 1201, a contents providing unit 1202, a focusing layer information management unit 1203, a focusing management unit 1204, and an output unit 1205.
The input unit 1201 receives the command to move the focusing within the same focusing layer or to change the focusing layer, from the user.
The contents providing unit 1202 stores the contents which will be provided to the user. Here, when the contents are provided to the user over a communication network, such as the Internet, the contents providing unit 1202 includes the communication network.
The focusing layer information management unit 1203 manages the contents, which will be provided to the user, and the focusing layer information, which is allotted to each of the elements. Here, the focusing layer information of the elements, which can be included in the attribute information on the elements, may include the focusing layer information on the corresponding element, the focusing layer information on the upper focusing layer, and the focusing layer information on the element linked to the corresponding element.
The focusing management unit 1204 represents the elements linked to the contents which will be provided to the user, receives the command about focusing from the user via the input unit 1201, and receives the information on the focusing layer of the elements from the focusing layer information management unit 1203 in order to move the focusing. In addition, when the command to play specific contents is input by the user, the focusing management unit 1204 receives the contents from the contents providing unit 1202 and provides the contents to the user through the output unit 1205.
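As a sketch of how the units of FIG. 12 might be wired together, with stand-in objects for the input, contents providing, and output units (all names here are assumptions, not the patent's implementation):

```python
class FocusingManagementUnit:
    """Sketch of unit 1204 wired to the input, contents providing, and output units."""

    def __init__(self, input_unit, contents_providing_unit, output_unit):
        self.input_unit = input_unit             # 1201: yields (command, element) pairs
        self.contents = contents_providing_unit  # 1202: maps elements to their contents
        self.output = output_unit                # 1205: receives what should be shown

    def run_once(self):
        """Handle one user command."""
        command, element = self.input_unit()
        if command == "play":
            # Fetch the requested contents and provide them through the output unit.
            self.output.append(self.contents[element])

# Example wiring with stand-in units.
shown = []
unit = FocusingManagementUnit(lambda: ("play", "Gallery 1"),
                              {"Gallery 1": "movie list contents"}, shown)
unit.run_once()
```

The design point is simply that unit 1204 mediates every interaction: the other units never talk to each other directly.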
The focusing layer information, which is allotted to the contents and the elements linked to the contents, can be formed in the structure shown in Table 1.
[Table 1]

Element | Focusing layer information on the element | Focusing layer information on the upper focusing layer | Linked element in the upper focusing layer | Existence of lower focusing layer link
---|---|---|---|---
DVD-video | 1 | none | none | yes
Gallery movie list | 2 | 1 | link 1 | yes
Gallery music video list | 2 | 1 | link 1 | yes

The present invention can be realized as a program which can be run on a computer, and can be performed by the computer using a recording medium which can be read from the computer.
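The focusing layer information of Table 1 could be held as per-element records like the following; the field names are assumptions, not the patent's terminology.

```python
# One record per row of Table 1.
focusing_layer_info = [
    {"element": "DVD-video", "layer": 1, "upper_layer": None,
     "linked_upper_element": None, "has_lower_layer_link": True},
    {"element": "Gallery movie list", "layer": 2, "upper_layer": 1,
     "linked_upper_element": "link 1", "has_lower_layer_link": True},
    {"element": "Gallery music video list", "layer": 2, "upper_layer": 1,
     "linked_upper_element": "link 1", "has_lower_layer_link": True},
]

def elements_in_layer(info, layer):
    """List the elements allotted the given focusing layer value."""
    return [record["element"] for record in info if record["layer"] == layer]
```

For example, `elements_in_layer(focusing_layer_info, 2)` lists the two gallery elements that share layer value 2 and link back to "link 1" in the upper layer.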
Here, the recording medium may be any kind of recording device in which data are recorded, for example, a magnetic recording medium such as a ROM, a floppy disk, or a hard disk; an optical medium such as a CD-ROM or a DVD; and carrier waves such as transmission over the Internet.
The drawings and specification of the invention are provided for illustration only and are not used to limit the scope of the invention set forth in the appended claims.
Industrial Applicability
As described above, according to the present invention, any kind of media having interactive contents formed by using markup language can be navigated using a device having a limited number of input keys, such as a television remote controller.
Claims (7)
1. A multi-layer focusing method for focusing contents provided in a multi-layer structure, the method comprising:
(a) allotting predetermined focusing layer values to contents elements linked to contents to be provided to a user, to form focusing layers;
(b) providing the contents elements included in a predetermined focusing layer and focusing on any one contents element of the contents elements provided to the user;
(c) receiving a predetermined command from the user; and (d) moving the focusing when the command is to move the focusing.
2. The method of claim 1, wherein act (d) comprises:
(d1) determining whether an upper focusing layer of a present focusing layer exists in the case where the command is to change the present focusing layer to the upper focusing layer; and (d2) when the upper focusing layer exists, providing contents elements included in the upper focusing layer to the user and focusing on any one of the contents elements included in the upper focusing layer.
3. The method of claim 1, wherein act (d) comprises:
(d1) determining whether a lower focusing layer is linked to the focused on contents element in the case where the command is to change the focusing layer to the lower focusing layer; and (d2) when the lower focusing layer is linked to the focused on contents element, providing contents elements included in the lower focusing layer to the user and focusing on any one contents element included in the lower focusing layer.
4. The method of claim 1, wherein act (d) comprises moving the focusing to a next contents element based on a predetermined order in the case where the command is to move the focusing within the same focusing layer.
5. An apparatus for managing multi-layer focusing, the apparatus comprising:
a contents providing unit for providing contents which are linked to contents elements and are to be provided to a user;
a focusing layer information management unit for allotting predetermined focusing layer values to the contents elements which are linked to the contents to be provided to the user, to form focusing layers;
an input unit for receiving commands from the user to move the focusing;
an output unit for providing the contents to the user, providing predetermined contents elements, and representing focused on contents elements linked to the contents; and a focusing management unit for providing the contents elements included in a specific focusing layer and the presently focused on contents element to the output unit and moving the focusing in the case where the command is to move the focusing.
6. The apparatus of claim 5, wherein the input unit is an input device including predetermined buttons or keys.
7. A recording medium on which a program for performing any one method of claims 1 through 4 in a computer is recorded, wherein the recording medium is read from by the computer.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020020014275A KR100833229B1 (en) | 2002-03-16 | 2002-03-16 | Multi-layer focusing method and apparatus therefor |
KR10-2002-0014275 | 2002-03-16 | ||
PCT/KR2003/000116 WO2003079350A1 (en) | 2002-03-16 | 2003-01-18 | Multi-layer focusing method and apparatus therefor |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2479483A1 (en) | 2003-09-25 |
Family
ID=28036086
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002479483A Abandoned CA2479483A1 (en) | 2002-03-16 | 2003-01-18 | Multi-layer focusing method and apparatus therefor |
Country Status (12)
Country | Link |
---|---|
US (2) | US7873914B2 (en) |
EP (1) | EP1485914A4 (en) |
JP (1) | JP4101767B2 (en) |
KR (1) | KR100833229B1 (en) |
CN (2) | CN101566915A (en) |
AU (1) | AU2003206156A1 (en) |
CA (1) | CA2479483A1 (en) |
MX (1) | MXPA04008945A (en) |
MY (1) | MY134823A (en) |
RU (1) | RU2316827C2 (en) |
TW (1) | TW591535B (en) |
WO (1) | WO2003079350A1 (en) |
Families Citing this family (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030222898A1 (en) * | 2002-06-03 | 2003-12-04 | International Business Machines Corporation | Integrated wizard user interface |
US7134089B2 (en) * | 2002-11-13 | 2006-11-07 | Microsoft Corporation | Directional focus navigation |
US20050015730A1 (en) * | 2003-07-14 | 2005-01-20 | Srimanth Gunturi | Systems, methods and computer program products for identifying tab order sequence of graphically represented elements |
US7719542B1 (en) | 2003-10-10 | 2010-05-18 | Adobe Systems Incorporated | System, method and user interface controls for communicating status information |
KR100694123B1 (en) * | 2004-07-30 | 2007-03-12 | 삼성전자주식회사 | Storage medium including audio-visual data and application programs, apparatus and method thereof |
US7802186B2 (en) * | 2004-10-06 | 2010-09-21 | Microsoft Corporation | Property independent in-place editing |
US7631278B2 (en) | 2004-11-19 | 2009-12-08 | Microsoft Corporation | System and method for directional focus navigation |
US7636897B2 (en) * | 2004-11-19 | 2009-12-22 | Microsoft Corporation | System and method for property-based focus navigation in a user interface |
US8539374B2 (en) * | 2005-09-23 | 2013-09-17 | Disney Enterprises, Inc. | Graphical user interface for electronic devices |
US20090044219A1 (en) * | 2005-09-28 | 2009-02-12 | Norifumi Katou | Device control method using an operation screen, and electronic device and system using the method |
US7509588B2 (en) | 2005-12-30 | 2009-03-24 | Apple Inc. | Portable electronic device with interface reconfiguration mode |
US10313505B2 (en) | 2006-09-06 | 2019-06-04 | Apple Inc. | Portable multifunction device, method, and graphical user interface for configuring and displaying widgets |
US8519964B2 (en) | 2007-01-07 | 2013-08-27 | Apple Inc. | Portable multifunction device, method, and graphical user interface supporting user navigations of graphical objects on a touch screen display |
US8619038B2 (en) | 2007-09-04 | 2013-12-31 | Apple Inc. | Editing interface |
JP5226588B2 (en) * | 2008-04-14 | 2013-07-03 | キヤノン株式会社 | Information processing apparatus and control method thereof |
US10007393B2 (en) * | 2010-01-19 | 2018-06-26 | Apple Inc. | 3D view of file structure |
US9170708B2 (en) | 2010-04-07 | 2015-10-27 | Apple Inc. | Device, method, and graphical user interface for managing folders |
US10788976B2 (en) | 2010-04-07 | 2020-09-29 | Apple Inc. | Device, method, and graphical user interface for managing folders with multiple pages |
JP4922446B2 (en) * | 2010-09-13 | 2012-04-25 | 株式会社東芝 | Electronic device, control method of electronic device |
JP2012128662A (en) * | 2010-12-15 | 2012-07-05 | Samsung Electronics Co Ltd | Display control device, program and display control method |
US20130167016A1 (en) * | 2011-12-21 | 2013-06-27 | The Boeing Company | Panoptic Visualization Document Layout |
EP2704003A1 (en) * | 2012-08-30 | 2014-03-05 | Siemens Aktiengesellschaft | System for designing or setting up a technical apparatus |
CN105849675B (en) | 2013-10-30 | 2019-09-24 | 苹果公司 | Show relevant user interface object |
CN103605203B (en) * | 2013-11-07 | 2017-02-22 | 麦克奥迪实业集团有限公司 | Automatic focusing method in digital slicing scanning process |
US20150242377A1 (en) * | 2014-02-24 | 2015-08-27 | Autodesk, Inc. | Logical structure-based document navigation |
USD762231S1 (en) * | 2014-06-27 | 2016-07-26 | Bold Limited | Display screen or portion thereof with graphical user |
USD868797S1 (en) * | 2015-04-24 | 2019-12-03 | Amazon Technologies, Inc. | Display screen having a graphical user interface for product substitution |
US11301422B2 (en) * | 2016-02-23 | 2022-04-12 | Samsung Electronics Co., Ltd. | System and methods for providing fast cacheable access to a key-value device through a filesystem interface |
USD826245S1 (en) * | 2016-06-03 | 2018-08-21 | Visa International Service Association | Display screen with graphical user interface |
DK201670595A1 (en) | 2016-06-11 | 2018-01-22 | Apple Inc | Configuring context-specific user interfaces |
US11816325B2 (en) | 2016-06-12 | 2023-11-14 | Apple Inc. | Application shortcuts for carplay |
US10620910B2 (en) | 2016-12-23 | 2020-04-14 | Realwear, Inc. | Hands-free navigation of touch-based operating systems |
US11507216B2 (en) | 2016-12-23 | 2022-11-22 | Realwear, Inc. | Customizing user interfaces of binary applications |
US11099716B2 (en) | 2016-12-23 | 2021-08-24 | Realwear, Inc. | Context based content navigation for wearable display |
US10936872B2 (en) | 2016-12-23 | 2021-03-02 | Realwear, Inc. | Hands-free contextually aware object interaction for wearable display |
US11675476B2 (en) | 2019-05-05 | 2023-06-13 | Apple Inc. | User interfaces for widgets |
Family Cites Families (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5544354A (en) * | 1994-07-18 | 1996-08-06 | Ikonic Interactive, Inc. | Multimedia matrix architecture user interface |
JP3813661B2 (en) | 1995-05-29 | 2006-08-23 | 松下電器産業株式会社 | Menu display device |
US6388714B1 (en) * | 1995-10-02 | 2002-05-14 | Starsight Telecast Inc | Interactive computer system for providing television schedule information |
US6075575A (en) * | 1995-10-02 | 2000-06-13 | Starsight Telecast, Inc. | Remote control device and method for using television schedule information |
EP2282541A3 (en) | 1995-10-02 | 2012-10-03 | Starsight Telecast, Inc. | Systems and methods for providing television schedule information |
JPH09149329A (en) | 1995-11-20 | 1997-06-06 | Hitachi Ltd | Picture information signal receiving system |
US5751369A (en) * | 1996-05-02 | 1998-05-12 | Harrison; Robert G. | Information retrieval and presentation systems with direct access to retrievable items of information |
JP3793975B2 (en) | 1996-05-20 | 2006-07-05 | ソニー株式会社 | Registration method of customized menu in hierarchical menu and video equipment provided with customized menu |
JPH1021036A (en) * | 1996-07-08 | 1998-01-23 | Hitachi Ltd | Interactive video input and output system |
JP3954653B2 (en) * | 1996-08-28 | 2007-08-08 | 松下電器産業株式会社 | BROADCAST RECEIVING APPARATUS, BROADCAST RECEIVING METHOD, AND RECORDING MEDIUM CONTAINING THE METHOD USING SELECTIVE NAVIGATION INFORMATION SELECTED WITH TRANSPORT STREAM |
JPH10136314A (en) * | 1996-10-31 | 1998-05-22 | Hitachi Ltd | Data storage method for storage medium and interactive video reproducing device |
JPH10290432A (en) * | 1997-04-14 | 1998-10-27 | Matsushita Electric Ind Co Ltd | Information supply medium, information processor utilizing the same and information supply system |
EP1015962B2 (en) * | 1997-06-25 | 2006-11-02 | Samsung Electronics Co., Ltd. | Method for creating home network macros |
US5990890A (en) * | 1997-08-25 | 1999-11-23 | Liberate Technologies | System for data entry and navigation in a user interface |
US6154205A (en) * | 1998-03-25 | 2000-11-28 | Microsoft Corporation | Navigating web-based content in a television-based system |
EP0988752B1 (en) * | 1998-04-08 | 2006-08-09 | Koninklijke Philips Electronics N.V. | A tv receiver with an electronic program guide (epg) |
US20020056098A1 (en) * | 1998-06-29 | 2002-05-09 | Christopher M. White | Web browser system for displaying recently viewed television channels |
US6614457B1 (en) * | 1998-10-27 | 2003-09-02 | Matsushita Electric Industrial Co., Ltd. | Focus control device that moves a focus in a GUI screen |
US6452609B1 (en) * | 1998-11-06 | 2002-09-17 | Supertuner.Com | Web application for accessing media streams |
US6590594B2 (en) * | 1999-03-25 | 2003-07-08 | International Business Machines Corporation | Window scroll-bar |
US7346920B2 (en) * | 2000-07-07 | 2008-03-18 | Sonic Solutions, A California Corporation | System, method and article of manufacture for a common cross platform framework for development of DVD-Video content integrated with ROM content |
JP3551112B2 (en) * | 2000-01-20 | 2004-08-04 | 日本電気株式会社 | Multimedia scenario editing apparatus and recording medium recording multimedia scenario editing program |
WO2001061508A1 (en) * | 2000-02-17 | 2001-08-23 | Digimarc Corporation | Watermark encoder and decoder enabled software and devices |
US20020112237A1 (en) * | 2000-04-10 | 2002-08-15 | Kelts Brett R. | System and method for providing an interactive display interface for information objects |
JP3467262B2 (en) * | 2000-11-10 | 2003-11-17 | 株式会社ソニー・コンピュータエンタテインメント | Entertainment device and receiving device |
US6918090B2 (en) * | 2002-01-23 | 2005-07-12 | International Business Machines Corporation | Dynamic setting of navigation order in aggregated content |
US7197717B2 (en) * | 2002-06-04 | 2007-03-27 | Microsoft Corporation | Seamless tabbed focus control in active content |
KR100866790B1 (en) * | 2002-06-29 | 2008-11-04 | 삼성전자주식회사 | Method and apparatus for moving focus for navigation in interactive mode |
KR20040045101A (en) * | 2002-11-22 | 2004-06-01 | 삼성전자주식회사 | Method for focusing input item on object picture embedded in markup picture and information storage medium therefor |
-
2002
- 2002-03-16 KR KR1020020014275A patent/KR100833229B1/en not_active IP Right Cessation
-
2003
- 2003-01-18 CN CNA2009101338902A patent/CN101566915A/en active Pending
- 2003-01-18 AU AU2003206156A patent/AU2003206156A1/en not_active Abandoned
- 2003-01-18 JP JP2003577260A patent/JP4101767B2/en not_active Expired - Fee Related
- 2003-01-18 CA CA002479483A patent/CA2479483A1/en not_active Abandoned
- 2003-01-18 CN CNA038062291A patent/CN1643595A/en active Pending
- 2003-01-18 RU RU2004127674/28A patent/RU2316827C2/en not_active IP Right Cessation
- 2003-01-18 MX MXPA04008945A patent/MXPA04008945A/en active IP Right Grant
- 2003-01-18 EP EP03703391A patent/EP1485914A4/en not_active Withdrawn
- 2003-01-18 WO PCT/KR2003/000116 patent/WO2003079350A1/en active Application Filing
- 2003-01-22 MY MYPI20030218A patent/MY134823A/en unknown
- 2003-01-22 TW TW092101331A patent/TW591535B/en not_active IP Right Cessation
- 2003-03-12 US US10/385,464 patent/US7873914B2/en not_active Expired - Fee Related
-
2007
- 2007-03-30 US US11/730,295 patent/US20070174779A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
JP2005521141A (en) | 2005-07-14 |
EP1485914A1 (en) | 2004-12-15 |
EP1485914A4 (en) | 2010-08-25 |
WO2003079350A1 (en) | 2003-09-25 |
RU2004127674A (en) | 2005-05-27 |
MXPA04008945A (en) | 2004-11-26 |
CN101566915A (en) | 2009-10-28 |
KR100833229B1 (en) | 2008-05-28 |
AU2003206156A1 (en) | 2003-09-29 |
TW591535B (en) | 2004-06-11 |
CN1643595A (en) | 2005-07-20 |
JP4101767B2 (en) | 2008-06-18 |
RU2316827C2 (en) | 2008-02-10 |
MY134823A (en) | 2007-12-31 |
KR20030075223A (en) | 2003-09-26 |
TW200304621A (en) | 2003-10-01 |
US20030174170A1 (en) | 2003-09-18 |
US7873914B2 (en) | 2011-01-18 |
US20070174779A1 (en) | 2007-07-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7873914B2 (en) | Multi-layer focusing method and apparatus therefor | |
US8432358B2 (en) | Methods and systems for enhancing television applications using 3D pointing | |
US20030001907A1 (en) | Method and apparatus for scrollable cross-point navigation in a user interface | |
KR101437198B1 (en) | Programmable on screen display and remote control | |
US20060136246A1 (en) | Hierarchical program guide | |
US20070046628A1 (en) | Apparatus and method for controlling user interface using jog dial and navigation key | |
KR100654448B1 (en) | Method and apparatus for providing user interface for searching contents | |
US20100058242A1 (en) | Menu display device and menu display method | |
KR20150004927A (en) | Electronic program guide with digital storage | |
KR20070072516A (en) | Apparatus for enabling to control at least one meadia data processing device, and method thereof | |
WO1997049242A1 (en) | System and method for using television schedule information | |
WO2008027321A2 (en) | Television control, playlist generation and dvr systems and methods | |
US20050213753A1 (en) | User interface of electronic apparatus and method for controlling the user interface | |
KR101177453B1 (en) | User interface method activating a clickable object and apparatus providing user interface method thereof | |
JPH07175816A (en) | Video associative retrieving device and method | |
EP2028587A1 (en) | Method and device for navigating a graphical user interface | |
JP2005515538A (en) | Technology that provides an unrelated graphical user interface that is not conspicuous in the background when selecting options | |
JP3551112B2 (en) | Multimedia scenario editing apparatus and recording medium recording multimedia scenario editing program | |
JP4383285B2 (en) | Icon display device and icon display method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
EEER | Examination request | ||
FZDE | Discontinued | ||
Effective date: 20100920 |