US20090063775A1 - Instrument, a system and a container for provisioning a device for personal care treatment, and a device for personal care treatment with such a container - Google Patents

Instrument, a system and a container for provisioning a device for personal care treatment, and a device for personal care treatment with such a container

Info

Publication number
US20090063775A1
Authority
US
United States
Prior art keywords
passage
container
filling needle
valve body
filling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US12/293,312
Other versions
US8281824B2 (en)
Inventor
Jeroen Molema
Wilko Westerhof
Bartele Henrik De Vries
Reinier Niels Lap
Olaf Martin De Jong
Bart-Jan Zwart
Johannes Rogier De Vrind
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Koninklijke Philips NV
Original Assignee
Koninklijke Philips Electronics NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics NV filed Critical Koninklijke Philips Electronics NV
Assigned to KONINKLIJKE PHILIPS ELECTRONICS N V reassignment KONINKLIJKE PHILIPS ELECTRONICS N V ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DE JONG, OLAF MARTIN, DE VRIES, BARTELE HENDRIK, DE VRIND, JOHANNES ROGIER, LAP, REINIER NIELS, MOLEMA, JEROEN, WESTERHOF, WILKO, ZWART, BART-JAN
Publication of US20090063775A1 publication Critical patent/US20090063775A1/en
Application granted granted Critical
Publication of US8281824B2 publication Critical patent/US8281824B2/en
Expired - Fee Related legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K15/00Arrangement in connection with fuel supply of combustion engines or other fuel consuming energy converters, e.g. fuel cells; Mounting or construction of fuel tanks
    • B60K15/03Fuel tanks
    • B60K15/04Tank inlets
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61CDENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C17/00Devices for cleaning, polishing, rinsing or drying teeth, teeth cavities or prostheses; Saliva removers; Dental appliances for receiving spittle
    • A61C17/16Power-driven cleaning or polishing devices
    • A61C17/22Power-driven cleaning or polishing devices with brushes, cushions, cups, or the like
    • A61C17/32Power-driven cleaning or polishing devices with brushes, cushions, cups, or the like reciprocating or oscillating
    • A61C17/34Power-driven cleaning or polishing devices with brushes, cushions, cups, or the like reciprocating or oscillating driven by electric motor
    • A61C17/36Power-driven cleaning or polishing devices with brushes, cushions, cups, or the like reciprocating or oscillating driven by electric motor with rinsing means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B26HAND CUTTING TOOLS; CUTTING; SEVERING
    • B26BHAND-HELD CUTTING TOOLS NOT OTHERWISE PROVIDED FOR
    • B26B19/00Clippers or shavers operating with a plurality of cutting edges, e.g. hair clippers, dry shavers
    • B26B19/38Details of, or accessories for, hair clippers, or dry shavers, e.g. housings, casings, grips, guards
    • B26B19/40Lubricating
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10TTECHNICAL SUBJECTS COVERED BY FORMER US CLASSIFICATION
    • Y10T137/00Fluid handling
    • Y10T137/8593Systems
    • Y10T137/87917Flow path with serial valves and/or closures
    • Y10T137/87925Separable flow path section, valve or closure in each
    • Y10T137/87941Each valve and/or closure operated by coupling motion
    • Y10T137/87949Linear motion of flow path sections operates both
    • Y10T137/87957Valves actuate each other

Abstract

The present invention provides a system and a method for partitioning a cache among application tasks in multiprocessors based on scheduling information. Cache partitioning is performed dynamically using the task-scheduling pattern provided by the task scheduler (405). The execution behavior of the application tasks is obtained from the task scheduler (405), and partitions are allocated (415) only to the subset of application tasks that are going to be executed in the upcoming clock cycles. This avoids unnecessarily reserving cache partitions for executing application tasks during the entire duration of their execution, so that an effective utilization of the cache is achieved.

Description

    FIELD OF THE INVENTION
  • The present invention relates in general to a data processing system comprising cache storage, and more specifically relates to dynamic partitioning of the cache storage for application tasks in a multiprocessor.
  • BACKGROUND OF THE INVENTION
  • Cache partitioning is a well-known technique in multi-tasking systems for achieving more predictable cache performance by reducing resource interference. In a data processing system comprising multiple processors, the cache storage is shared between multiple processes or application tasks and is partitioned into different sections for different application tasks. In a multiprocessing system with a large number of application tasks, cache partitioning may result in small partitions per application task, as the total cache size is limited. This causes performance deterioration, because an application task can no longer accommodate its working set in the allotted cache partition, which leads to more cache misses. It can be advantageous to partition the cache into sections, where each section is allocated to a respective class of processes, rather than having the processes share the entire cache storage.
  • US patent application 2002/0002657 A1 by Henk Muller et al. discloses a method of operating a cache memory in a system in which a processor is capable of executing a plurality of processes. Such techniques partition the cache into many small partitions instead of using one monolithic data cache in which accesses to different data objects may interfere. In such cases, the compiler is typically aware of the cache architecture and allocates the cache partitions to the application tasks.
  • Future multiprocessor systems will be very complex and will contain a large number of application tasks. Cache partitioning will then result in small partitions per task, which will deteriorate performance.
  • SUMMARY OF THE INVENTION
  • It is, inter alia, an object of the invention to provide a system and a method for improved dynamic cache partitioning in multiprocessors. The invention is defined by the independent claims. Advantageous embodiments are defined in the dependent claims.
  • The invention is based on the recognition that the prior-art techniques do not exploit the pattern of execution of the application tasks. For instance, the execution behavior of multimedia applications often follows a periodic pattern: such applications include application tasks that are scheduled periodically and follow a recurring pattern of execution. By exploiting this behavior, more efficient cache partitioning is possible.
  • A cache partitioning technique for application tasks in multiprocessors, based on scheduling information, is provided. Cache partitioning is performed dynamically using the task-scheduling pattern provided by the task scheduler. The execution behavior of the application tasks is obtained from the task scheduler, and partitions are allocated only to the subset of application tasks that are going to be executed in the upcoming clock cycles. The present invention improves cache utilization by avoiding unnecessary reservation of cache partitions for executing application tasks during the entire duration of their execution, and hence an effective utilization of the cache is achieved.
  • In an example embodiment of the present invention, a method for dynamically partitioning a cache memory in a multiprocessor for a set of application tasks is provided. The method includes the steps of storing a scheduling pattern of the set of application tasks in a storage, selecting a subset of application tasks from the set of application tasks and updating cache controller logic with the subset, where the subset comprises those application tasks that will be executed in the upcoming clock cycles, and allocating cache partitions dynamically to the subset of application tasks updated in the cache controller logic. A task scheduler stores the scheduling pattern of the application tasks in a look-up table (LUT). The selected subset of application tasks is written to a partition control register, and a dynamic partition unit allocates cache partitions dynamically to the subset of application tasks stored in the partition control register.
  • In another example embodiment of the present invention, a system is provided for dynamically partitioning a cache memory in a multiprocessor for a set of application tasks. The system includes a task scheduler for storing a scheduling pattern of the set of application tasks, and cache controller logic for selecting and updating a subset of application tasks from the set of application tasks and for allocating cache partitions dynamically to the subset of application tasks updated in the cache controller logic. The cache controller logic includes a partition control register for holding the updated subset of application tasks, and a dynamic partition unit for allocating cache partitions dynamically to that subset.
  • The above summary of the present invention is not intended to represent each disclosed embodiment, or every aspect, of the present invention. Other aspects and example embodiments are provided in the figures and the detailed description that follows.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a flow diagram illustrating an embodiment of a method for dynamically partitioning a cache memory in a multiprocessor according to an embodiment of the present invention.
  • FIG. 2 illustrates an embodiment of a basic architecture of a data processing system.
  • FIG. 3 shows a table of an example of the task scheduling pattern of the application tasks.
  • FIG. 4 is a block diagram illustrating a dynamic partitioning system embodiment of the present invention.
  • DESCRIPTION OF EMBODIMENTS
  • The present invention proposes a cache partitioning technique based on the patterns of execution of the application tasks. Partitions are allocated only to a subset of the application tasks, namely those that are going to be executed in the upcoming clock cycles. Because only a subset of the application tasks is considered for cache partitioning, larger partitions can be allotted to those tasks.
  • FIG. 1 is a flow diagram illustrating the method for dynamically partitioning a cache memory in a multiprocessor according to an embodiment 100 of the present invention. At step 105, the scheduling pattern of the set of application tasks is stored in a look-up table (LUT). The application tasks may be elementary operations (such as hardware operations or computer instructions) or ensembles of elementary operations (such as software programs). The application tasks follow a periodic pattern of execution, and this scheduling pattern is stored in the LUT. At step 110, a subset of application tasks is selected from the set of application tasks. The subset includes the application tasks that are going to be executed in the upcoming clock cycles. A minimal software model of these two steps is sketched below.
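The following is a minimal, purely illustrative software model of steps 105 and 110. It assumes the scheduler keeps one period of the repeating schedule in a small table; the names SchedulerLut and upcoming_tasks, the use of C++ and the look-ahead parameter are illustrative assumptions and are not taken from the patent.

    // Illustrative model of steps 105 (store the scheduling pattern in a LUT)
    // and 110 (select the subset of tasks scheduled in the upcoming instances).
    #include <cstddef>
    #include <cstdint>
    #include <set>
    #include <vector>

    using TaskId = std::uint8_t;

    struct SchedulerLut {
        // One period of the repeating schedule, e.g. {1, 2, 1, 2, 1, 2, 3} for
        // the pattern of FIG. 3 (assumed non-empty).
        std::vector<TaskId> pattern;

        // Step 110: tasks scheduled within the next `lookahead` schedule
        // instances, starting from `current_instance` (0-based).
        std::set<TaskId> upcoming_tasks(std::size_t current_instance,
                                        std::size_t lookahead) const {
            std::set<TaskId> subset;
            for (std::size_t i = 0; i < lookahead; ++i) {
                subset.insert(pattern[(current_instance + i) % pattern.size()]);
            }
            return subset;
        }
    };

With the pattern of FIG. 3 and a look-ahead of two instances, the subset at schedule instance 1 is {T1, T2}; T3 only enters the subset around schedule instance 6, matching the example discussed with FIG. 3.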
  • At step 115, a partition control register is updated with the subset of application tasks selected in step 110. The partition control register may be implemented as a memory-mapped input/output (MMIO) register. The task scheduler updates the partition control register with the selected subset of application tasks, for example along the lines sketched below. At step 120, a dynamic partitioning unit allocates cache partitions dynamically to the subset of application tasks updated in the partition control register.
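As a sketch of how step 115 might look when the partition control register is a memory-mapped register, the scheduler could write one bit per upcoming task ID. The register address and the one-bit-per-task layout are illustrative assumptions only, not details specified by the patent.

    // Illustrative sketch of step 115: the task scheduler writes the IDs of the
    // upcoming tasks into a memory-mapped partition control register.
    #include <cstdint>
    #include <set>

    // Hypothetical MMIO address of the partition control register.
    constexpr std::uintptr_t kPartitionCtrlRegAddr = 0x40000010;

    inline void update_partition_control_register(const std::set<std::uint8_t>& subset) {
        std::uint32_t mask = 0;
        for (std::uint8_t tid : subset) {
            mask |= 1u << tid;  // one bit per upcoming task ID (assumed layout)
        }
        auto* reg = reinterpret_cast<volatile std::uint32_t*>(kPartitionCtrlRegAddr);
        *reg = mask;            // step 115: update the register with the selected subset
    }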
  • FIG. 2 illustrates an embodiment of a basic architecture of a data processing system 200. The data processing system 200 includes a processing unit 210, a cache memory 220, cache controller logic 215, a data bus 230, a main memory 225 and a task scheduler 205. The scheduler can be implemented either in software or in hardware. If implemented in software, it may run on the processing unit 210; if implemented in hardware, it is a separate unit, the task scheduler 205, which is coupled to the processing unit 210. The main memory 225 is coupled to the data bus 230. The cache memory 220 and the cache controller logic 215 are likewise coupled to the data bus 230. The processing unit 210 is coupled to the cache memory 220. The task scheduler 205 is coupled to the data bus 230 as well as to the processing unit 210 and the cache controller logic 215.
  • Such a data processing system 200 may be implemented as a system-on-chip (SoC). The data processing system 200 explained above is particularly applicable to multi-tasking streaming applications, for example in audio and video applications.
  • FIG. 3 shows a table of an example of the task scheduling pattern of the application tasks 300. A particular repetition pattern of execution of application tasks is termed a scheduling pattern. This scenario is common in multimedia/streaming applications, where the constituent application tasks follow a repetitive pattern of scheduling (and hence of execution). Consider an example periodic execution pattern of tasks T1, T2 and T3. The top row indicates the task IDs TID and the bottom row indicates the schedule instances SI (1-14). From the figure it can be seen that the scheduling follows a recurring pattern (T1, T2, T1, T2, T1, T2, T3), which can be tracked by the task scheduler at run time. The task scheduler stores this scheduling information dynamically in the LUT. The proposed cache partition policy considers only a subset of the tasks when allocating the partitions.
  • In this case the suitable subset is (T1, T2), as task T3 occurs only at schedule instance 7, and hence the partition for task T3 can be allocated at a later time (i.e. by schedule instance 7). As the entire cache is allocated to T1 and T2 for most of the execution time (schedule instances 1-6), a more efficient cache partitioning is achieved. At schedule instance 7, part of the cache partition occupied by either T1 or T2 (chosen according to some cache replacement policy such as least recently used (LRU)) can be evicted to accommodate the partition for T3; see the illustrative walk-through below.
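The small program below walks through the FIG. 3 pattern with a one-instance look-ahead and shows at which schedule instances a partition for T3 would have to exist. It is purely illustrative: partition sizes and the LRU eviction itself are not modelled, and the one-instance look-ahead is an assumption.

    // Illustrative walk through the recurring pattern of FIG. 3.
    #include <cstdio>

    int main() {
        const int pattern[7] = {1, 2, 1, 2, 1, 2, 3};  // T1,T2,T1,T2,T1,T2,T3
        for (int si = 0; si < 14; ++si) {
            int current = pattern[si % 7];
            int next    = pattern[(si + 1) % 7];
            // A partition for T3 is needed only while T3 is visible in the
            // scheduling window; reclaiming the space from T1/T2 (e.g. by LRU
            // eviction) is not modelled here.
            bool t3_needs_partition = (current == 3) || (next == 3);
            std::printf("SI %2d: running T%d, T3 partition %s\n",
                        si + 1, current,
                        t3_needs_partition ? "allocated" : "not reserved");
        }
        return 0;
    }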
  • FIG. 4 is a block diagram illustrating a dynamic partitioning system 400 according to an embodiment of the present invention. The figure shows a part of the basic data processing system of FIG. 2 and serves to explain the relationship between the task scheduler 205 (405 in FIG. 4) and the cache controller logic 215 (415 in FIG. 4) for performing the dynamic cache partitioning according to the present invention. The system includes a task scheduler 205, a look-up table (LUT) 410, cache controller logic 215 (415 in FIG. 4), a dynamic partitioning unit 420, and a partition control register 425.
  • The task scheduler 205 stores the task schedule pattern in the form of the LUT 410. The cache controller logic 215 includes a partition control register 425 and a dynamic partition unit 420. The partition control register 425 is updated by the task scheduler 205 and contains information regarding which application tasks are going to be executed in the upcoming clock cycles; this information includes the task IDs of those application tasks. According to the example in FIG. 3, at schedule instance 1 the partition control register 425 holds only the task IDs corresponding to T1 and T2. At, say, schedule instance 6, the partition control register 425 holds the task IDs of T1, T2 and T3, which implies that a partition has to be allocated to T3 as well.
  • The dynamic partition unit 420 reads the information from the partition control register 425 and allocates partitions only to the application tasks whose IDs are registered in the partition control register 425. In this way only a subset of the application tasks is selected for cache partition allocation, and the available cache space is used effectively across the application tasks. A minimal sketch of such an allocation step is given below.
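The sketch below shows what the allocation step could look like in software, assuming an 8-way set-associative cache whose ways are divided evenly among the registered task IDs. The way-based partitioning, the even split and the 32-bit register layout are illustrative assumptions; in the embodiment this function is performed in hardware by the dynamic partition unit 420.

    // Illustrative sketch of step 120: allocate cache ways only to the tasks
    // whose IDs are registered in the partition control register.
    #include <cstdint>
    #include <map>
    #include <vector>

    constexpr unsigned kCacheWays = 8;  // hypothetical 8-way set-associative cache

    std::map<std::uint8_t, std::vector<unsigned>>
    allocate_partitions(std::uint32_t partition_ctrl_reg) {
        // Collect the task IDs registered in the control register
        // (assumed one bit per task ID, as in the earlier sketch).
        std::vector<std::uint8_t> registered;
        for (std::uint8_t tid = 0; tid < 32; ++tid) {
            if (partition_ctrl_reg & (1u << tid)) {
                registered.push_back(tid);
            }
        }
        std::map<std::uint8_t, std::vector<unsigned>> ways_of_task;
        if (registered.empty()) {
            return ways_of_task;
        }
        // Distribute the ways round-robin over the registered tasks only;
        // tasks not scheduled in the upcoming cycles receive no ways at all.
        for (unsigned way = 0; way < kCacheWays; ++way) {
            ways_of_task[registered[way % registered.size()]].push_back(way);
        }
        return ways_of_task;
    }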
  • The present invention will find industrial application in systems-on-chip (SoC) for audio, video and mobile applications. The present invention improves cache utilization by avoiding unnecessary reservation of cache partitions for executing application tasks during the entire duration of their execution, so that an effective utilization of the cache storage is achieved.
  • It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word “comprising” does not exclude the presence of elements or steps other than those listed in a claim. The word “a” or “an” preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and/or by means of a suitably programmed processor. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

Claims (20)

1. An instrument (23) for filling a substance supply container (1) of a device (21) for personal care treatment, the instrument comprising:
a filling needle (4) having a distal end and a proximal end suitable for connection to a storage reservoir (22), a substance transport conduit (9) extending from the proximal end to at least one outlet (10, 11) provided in a radially facing surface portion closely adjacent the distal end of the filling needle (4), and
a sleeve (14, 15) around the filling needle (4) movable between a first position closing off the at least one outlet (10, 11) and a second position proximally away from the first position and the at least one outlet (10, 11).
2. An instrument (23) according to claim 1, wherein the sleeve (14, 15) has a distal end face arranged for sealingly contacting an inlet end of a passage (5) through a refill socket (3) of the substance supply container (1) to be filled.
3. An instrument (23) according to claim 1, further comprising a spring for urging the sleeve (14, 15) towards the first position.
4. An instrument (23) according to claim 2, wherein the distal end of the filling needle (4) has a frontal face (16) having at least one projection (18) or one recess extending in the axial direction of the filling needle (4) for releasable engagement with, respectively, a recess (8) or a projection of a valve body (7) that closes off the passage (5).
5. An instrument (23) according to claim 4, wherein the at least one outlet (10, 11) is located directly adjacent the frontal face (16).
6. An instrument (23) according to claim 1, further comprising a storage reservoir (22) connected to the proximal end of the filling needle (4).
7. An instrument (23′) according to claim 1, further comprising a coupling device (32) for releasably connecting the proximal end of the filling needle (4) to the storage reservoir (22′).
8. An instrument (23′) according to claim 7, further comprising a first holding member for holding the device for personal care treatment and a second holding member for holding the storage reservoir (22′).
9. A system (20) for provisioning a device (21) for personal care treatment with a substance to be dispensed from the device (21), said system (20) comprising:
an instrument (23) according to claim 1; and
a container (1) bounding a storage area (2) for storing the substance, said container (1) being connected or connectable to a substance dispensing structure of the device (21), and having a refill socket (3) for connecting the container (1) to the filling needle (4) in a filling position for filling the container (1), said refill socket (3) comprising:
a passage (5) extending from an inlet end outside the storage area (2) to a free space (6) in or communicating with the storage area (2), said passage (5) having a narrowest section, said free space (6) inside of the narrowest section extending radially from the narrowest section of the passage (5) or from a continuation of the narrowest section of the passage (5), for receiving a substance flow out of the at least one outlet (10, 11) of the filling needle (4) in the filling position, and
a valve body (7) displaceable between a closed position and an opened position, the valve body (7) in the closed position closing off the passage (5), and the valve body (7) in the opened position being inwardly spaced from the narrowest section of the passage (5) and contacting the filling needle (4) in the filling position extending through the passage (5) and holding the valve body (7) in the opened position,
wherein the sleeve (14, 15) contacts the inlet end of the passage (5) when the filling needle (4) is in a position projecting into the passage (5).
10. A system (20) according to claim 9, wherein the sleeve (14, 15) sealingly contacts the inlet end of the passage (5) when the filling needle (4) is in a position projecting into the passage (5).
11. A system (20) according to claim 9, wherein the narrowest section of said passage (5) has a cross-section providing a sealing fit with a tip end of the filling needle (4) when the tip end of the filling needle (4) in the filling position extends in said passage (5).
12. A system (20) according to claim 9, wherein, in the closed position, the valve body (7) extends at least to the inlet end of the passage (5) when the filling needle (4) is retracted from the passage (5).
13. A system (20) according to claim 9, wherein the distal end of the filling needle (4) has a frontal face (16) having at least one projection (18) or at least one recess extending in the axial direction of the filling needle (4), and wherein the valve body (7) has, respectively, at least one recess (8) or at least one projection on its outside in releasable engagement with, respectively, the projection (18) or the recess of the filling needle (4) when the filling needle (4) projects into the passage (5).
14. A container (1), for cooperation with an instrument (23) according to claim 1, for holding a supply of a substance to be dispensed from a device (21) for personal care treatment, said container (1) bounding a storage area (2) for storing the substance, said container (1) being connected or connectable to a substance dispensing structure of the device (21) for personal care treatment, and having a refill socket (3) for connecting the container (1) to the filling needle (4) of the instrument (23) in a filling position for filling the container (1), said refill socket (3) comprising:
a passage (5) extending from an inlet end outside the storage area (2) to a free space (6) in or communicating with the storage area (2), said passage (5) having a narrowest section, said free space (6) inside of the narrowest section extending radially from the narrowest section of the passage (5) or from a continuation of the narrowest section of the passage (5), for receiving a substance flow out of the at least one outlet (10, 11) of the filling needle (4) when the filling needle (4) extends into said free space (6), and
a valve body (7) displaceable between a closed position and an opened position, the valve body (7) in the closed position closing off the passage (5) and in the opened position being inwardly spaced from at least the narrowest section of the passage (5), the valve body (7) being arranged for contacting the filling needle (4) when the filling needle (4) extends into said free space (6) for holding the valve body (7) in the opened position,
wherein the narrowest section is bounded by a wall (19) of a material that is softer than a material of the valve body (7).
15. A container (1) according to claim 14, wherein the softer material is an elastomer material.
16. A container (1) according to claim 14, wherein the valve body (7) in its closed position extends at least to the inlet end of the passage (5).
17. A container (1) according to claim 14, wherein the valve body (7) has at least one recess (8) or at least one projection on its outside for releasable engagement with, respectively, a projection (18) on or a recess in the filling needle (4).
18. A container (1) according to claim 14, further comprising a spring (13) contacting the valve body (7) and provided for urging the valve body (7) to the closed position.
19. A container (1) according to claim 18, wherein the spring (13) is spirally wound and comprises windings extending around the valve body (7) at least when the valve body (7) is in its open position.
20. A device (21) for personal care treatment comprising a container (1) according to claim 14.
US12/293,312 2006-03-24 2006-09-20 Instrument, a system and a container for provisioning a device for personal care treatment, and a device for personal care treatment with such a container Expired - Fee Related US8281824B2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
EP06111711.5 2006-03-24
EP06111711 2006-03-24
EP06111711 2006-03-24
PCT/IB2006/053404 WO2007110714A1 (en) 2006-03-24 2006-09-20 An instrument, a system and a container for refilling a device for personal care treatement, and a device for personal care treatement with such a container

Publications (2)

Publication Number Publication Date
US20090063775A1 (en) 2009-03-05
US8281824B2 US8281824B2 (en) 2012-10-09

Family

ID=37745858

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/293,312 Expired - Fee Related US8281824B2 (en) 2006-03-24 2006-09-20 Instrument, a system and a container for provisioning a device for personal care treatment, and a device for personal care treatment with such a container
US13/606,593 Expired - Fee Related US8678051B2 (en) 2006-03-24 2012-09-07 Instrument, a system and a container for provisioning a device for personal care treatment, and a device for personal care treatment with such a container

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/606,593 Expired - Fee Related US8678051B2 (en) 2006-03-24 2012-09-07 Instrument, a system and a container for provisioning a device for personal care treatment, and a device for personal care treatment with such a container

Country Status (5)

Country Link
US (2) US8281824B2 (en)
EP (1) EP2001640A1 (en)
JP (1) JP4965641B2 (en)
CN (1) CN101466508B (en)
WO (1) WO2007110714A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120198158A1 (en) * 2009-09-17 2012-08-02 Jari Nikara Multi-Channel Cache Memory

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
HK1077154A2 (en) 2003-12-30 2006-02-03 Vasogen Ireland Ltd Valve assembly
US7998134B2 (en) 2007-05-16 2011-08-16 Icu Medical, Inc. Medical connector
US20070088293A1 (en) 2005-07-06 2007-04-19 Fangrow Thomas F Jr Medical connector with closeable male luer
CN101466508B (en) * 2006-03-24 2011-06-29 皇家飞利浦电子股份有限公司 An instrument, a system and a container for refilling a device for personal care treatement, and a device for personal care treatement with such a container
US11229746B2 (en) 2006-06-22 2022-01-25 Excelsior Medical Corporation Antiseptic cap
US9259535B2 (en) 2006-06-22 2016-02-16 Excelsior Medical Corporation Antiseptic cap equipped syringe
ATE504223T1 (en) * 2007-01-18 2011-04-15 Eveready Battery Inc SHAVING SYSTEM WITH GAS GENERATING CELL
US9849276B2 (en) 2011-07-12 2017-12-26 Pursuit Vascular, Inc. Method of delivering antimicrobial to a catheter
US9078992B2 (en) 2008-10-27 2015-07-14 Pursuit Vascular, Inc. Medical device for applying antimicrobial to proximal end of catheter
US9168366B2 (en) 2008-12-19 2015-10-27 Icu Medical, Inc. Medical connector with closeable luer connector
GB201007226D0 (en) * 2010-04-30 2010-06-16 Reckitt & Colman Overseas A combination of a liquid container and a refill device
WO2012162259A2 (en) 2011-05-20 2012-11-29 Excelsior Medical Corporation Caps for cannula access devices
US10166381B2 (en) 2011-05-23 2019-01-01 Excelsior Medical Corporation Antiseptic cap
EP3760275A1 (en) 2011-09-09 2021-01-06 ICU Medical, Inc. Medical connectors with fluid-resistant mating interfaces
AU2015252808B2 (en) 2014-05-02 2019-02-21 Excelsior Medical Corporation Strip package for antiseptic cap
EP3294404A4 (en) 2015-05-08 2018-11-14 ICU Medical, Inc. Medical connectors configured to receive emitters of therapeutic agents
ES2929769T3 (en) 2016-10-14 2022-12-01 Icu Medical Inc Disinfectant caps for medical connectors
WO2018204206A2 (en) 2017-05-01 2018-11-08 Icu Medical, Inc. Medical fluid connectors and methods for providing additives in medical fluid lines
US11400195B2 (en) 2018-11-07 2022-08-02 Icu Medical, Inc. Peritoneal dialysis transfer set with antimicrobial properties
US11517732B2 (en) 2018-11-07 2022-12-06 Icu Medical, Inc. Syringe with antimicrobial properties
US11541220B2 (en) 2018-11-07 2023-01-03 Icu Medical, Inc. Needleless connector with antimicrobial properties
US11534595B2 (en) 2018-11-07 2022-12-27 Icu Medical, Inc. Device for delivering an antimicrobial composition into an infusion device
US11541221B2 (en) 2018-11-07 2023-01-03 Icu Medical, Inc. Tubing set with antimicrobial properties
US11433215B2 (en) 2018-11-21 2022-09-06 Icu Medical, Inc. Antimicrobial device comprising a cap with ring and insert
AU2021396147A1 (en) 2020-12-07 2023-06-29 Icu Medical, Inc. Peritoneal dialysis caps, systems and methods

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3422864A (en) * 1965-12-22 1969-01-21 Allinquant Fernand St Self-locking connector for fluid transfer
US5758700A (en) * 1993-08-24 1998-06-02 Vanderploeg; Richard D. Bottle cap interlock
US6039301A (en) * 1997-04-22 2000-03-21 U.S. Philips Corporation Container and sealing device for use in the container
US6220772B1 (en) * 1999-01-13 2001-04-24 Optiva Corporation Fluid-dispensing and refilling system for a power toothbrush
US20040062591A1 (en) * 1999-01-13 2004-04-01 Hall Scott E. Fluid-dispensing and refilling system for a power toothbrush
US20050004498A1 (en) * 2003-07-03 2005-01-06 Michael Klupt Dental hygiene device
US7264026B2 (en) * 2001-02-12 2007-09-04 Koninklijke Philips Electronics N.V. Refill and storage holder for personal care appliance
US7703486B2 (en) * 2006-06-06 2010-04-27 Cardinal Health 414, Inc. Method and apparatus for the handling of a radiopharmaceutical fluid
US20100252143A1 (en) * 2009-04-01 2010-10-07 Victor Air Tools Co. Ltd. Filling structure of a painting device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE1247900B (en) 1963-01-21 1967-08-17 Sunbeam Corp Electric shaver
FR2382319A1 (en) 1977-03-04 1978-09-29 Huet Joel Electric razor with shaving lotion spray - has attached supple sac containing lotion and nozzles by cutting head
DE9117141U1 (en) 1990-11-15 1996-04-18 Henkel Kgaa Refill cartridge for pen sleeve
DE19801111A1 (en) 1998-01-15 1999-07-22 Stern Hans Jakob Toothbrush with container for toothpaste and/or water integrated in handle
TR200103559T2 (en) * 1999-06-10 2002-04-22 Unilever N. V. Valve connection mechanism
GB2399005A (en) 2003-03-07 2004-09-08 Dirk Earl Bovell-Henry Pump operated toothbrush with refillable cartridge
CN101466508B (en) * 2006-03-24 2011-06-29 皇家飞利浦电子股份有限公司 An instrument, a system and a container for refilling a device for personal care treatement, and a device for personal care treatement with such a container

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3422864A (en) * 1965-12-22 1969-01-21 Allinquant Fernand St Self-locking connector for fluid transfer
US5758700A (en) * 1993-08-24 1998-06-02 Vanderploeg; Richard D. Bottle cap interlock
US6039301A (en) * 1997-04-22 2000-03-21 U.S. Philips Corporation Container and sealing device for use in the container
US6220772B1 (en) * 1999-01-13 2001-04-24 Optiva Corporation Fluid-dispensing and refilling system for a power toothbrush
US20040062591A1 (en) * 1999-01-13 2004-04-01 Hall Scott E. Fluid-dispensing and refilling system for a power toothbrush
US7264026B2 (en) * 2001-02-12 2007-09-04 Koninklijke Philips Electronics N.V. Refill and storage holder for personal care appliance
US20050004498A1 (en) * 2003-07-03 2005-01-06 Michael Klupt Dental hygiene device
US7703486B2 (en) * 2006-06-06 2010-04-27 Cardinal Health 414, Inc. Method and apparatus for the handling of a radiopharmaceutical fluid
US20100252143A1 (en) * 2009-04-01 2010-10-07 Victor Air Tools Co. Ltd. Filling structure of a painting device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120198158A1 (en) * 2009-09-17 2012-08-02 Jari Nikara Multi-Channel Cache Memory
US9892047B2 (en) * 2009-09-17 2018-02-13 Provenance Asset Group Llc Multi-channel cache memory

Also Published As

Publication number Publication date
US20130008562A1 (en) 2013-01-10
CN101466508B (en) 2011-06-29
JP4965641B2 (en) 2012-07-04
WO2007110714A1 (en) 2007-10-04
US8281824B2 (en) 2012-10-09
US8678051B2 (en) 2014-03-25
CN101466508A (en) 2009-06-24
EP2001640A1 (en) 2008-12-17
JP2009536836A (en) 2009-10-22

Similar Documents

Publication Publication Date Title
US20090063775A1 (en) Instrument, a system and a container for provisioning a device for personal care treatment, and a device for personal care treatment with such a container
US6658564B1 (en) Reconfigurable programmable logic device computer system
US7552304B2 (en) Cost-aware design-time/run-time memory management methods and apparatus
US20110113215A1 (en) Method and apparatus for dynamic resizing of cache partitions based on the execution phase of tasks
US7979680B2 (en) Multi-threaded parallel processor methods and apparatus
US9329867B2 (en) Register allocation for vectors
US6446258B1 (en) Interactive instruction scheduling and block ordering
JP2005505030A (en) Scheduling method in reconfigurable hardware architecture having multiple hardware configurations
US20030237080A1 (en) System and method for improved register allocation in an optimizing compiler
US9032411B2 (en) Logical extended map to demonstrate core activity including L2 and L3 cache hit and miss ratio
US20050081181A1 (en) System and method for dynamically partitioning processing across plurality of heterogeneous processors
US20070101326A1 (en) Dynamic change of thread contention scope assignment
Kapasi et al. Stream scheduling
Forsberg et al. HePREM: Enabling predictable GPU execution on heterogeneous SoC
Oh et al. FineReg: Fine-grained register file management for augmenting GPU throughput
CN114968549A (en) Method and apparatus for allocating resources to tasks
US8510529B2 (en) Method for generating program and method for operating system
Ausavarungnirun Techniques for shared resource management in systems with throughput processors
US9244828B2 (en) Allocating memory and using the allocated memory in a workgroup in a dispatched data parallel kernel
WO2008026142A1 (en) Dynamic cache partitioning
Voitsechov et al. Software-directed techniques for improved gpu register file utilization
US8473904B1 (en) Generation of cache architecture from a high-level language description
US20120137300A1 (en) Information Processor and Information Processing Method
Dümmler et al. Execution schemes for the NPB-MZ benchmarks on hybrid architectures: a comparative study
Jesshope et al. Evaluating CMPs and their memory architecture

Legal Events

Date Code Title Description
AS Assignment

Owner name: KONINKLIJKE PHILIPS ELECTRONICS N V, NETHERLANDS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MOLEMA, JEROEN;WESTERHOF, WILKO;DE VRIES, BARTELE HENDRIK;AND OTHERS;REEL/FRAME:021542/0853

Effective date: 20071123

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20201009