US20140254343A1 - Peer to peer vibration mitigation - Google Patents
- Publication number
- US20140254343A1 (U.S. application Ser. No. 13/788,548)
- Authority
- US
- United States
- Prior art keywords
- storage node
- chassis
- vibration
- performance degradation
- communication
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B33/00—Constructional parts, details or accessories not provided for in the other groups of this subclass
- G11B33/14—Reducing influence of physical parameters, e.g. temperature change, moisture, dust
- G11B33/1406—Reducing the influence of the temperature
- G11B33/1413—Reducing the influence of the temperature by fluid cooling
- G11B33/142—Reducing the influence of the temperature by fluid cooling by air cooling
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/36—Monitoring, i.e. supervising the progress of recording or reproducing
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B33/00—Constructional parts, details or accessories not provided for in the other groups of this subclass
- G11B33/02—Cabinets; Cases; Stands; Disposition of apparatus therein or thereon
- G11B33/08—Insulation or absorption of undesired vibrations or sounds
Definitions
- Implementations described and claimed herein provide for a physical or logical drive operation interface in a distributed computing and storage environment over which communications may be initiated by storage nodes and/or system-level controllers to effectuate real-time corrective actions that mitigate system vibrations.
- FIG. 1 illustrates one implementation of a distributed computing system with peer-to-peer vibration mitigation capability.
- FIG. 2 illustrates another implementation of a distributed computing system with peer-to-peer vibration mitigation capability.
- FIG. 3 illustrates example operations for peer-to-peer vibration mitigation according to one implementation.
- FIG. 4 discloses a block diagram of a computer system suitable for implementing aspects of at least one implementation of a peer-to-peer vibration mitigation system.
- Vibration can be a cause of hard disc drive performance problems, particularly in systems containing multiple disc drives in the same enclosure. Vibrations can be caused by forces including without limitation a drive's own actuator movement, the activity of other drives in a system enclosure, other sources of vibration such as cooling fans, etc.
- a central host server may query or monitor one or more storage node logs to discover information relating to the amount of vibration experienced at each of the storage nodes.
- vibrational problems may be discovered after the fact or go unnoticed entirely if the central host server does not aggressively monitor or query the storage nodes over the primary data interface. Because primary data interfaces are typically very busy, aggressive monitoring of the storage nodes is not always feasible.
- storage nodes may be located in a chassis having chassis management electronics. Unlike a remote system host, the chassis management electronics typically have knowledge of the physical locations of the individual storage nodes. However, the chassis management electronics typically lack the ability to communicate with the storage nodes over the primary data interface.
- implementations of the systems described herein provide for a secondary communication interface over which the chassis management electronics and/or system storage nodes can initiate communications to effectuate performance-increasing system changes.
- FIG. 1 illustrates one implementation of a distributed computing system 100 with peer-to-peer vibration mitigation capability.
- the distributed computing system 100 has a central host server 106 that is communicatively coupled to a plurality of storage nodes (e.g., storage nodes 102 , 104 ) in the distributed computing system 100 .
- Each computing node includes one or more processing units (e.g., a processor 122 ) attached to one or more hard drive assemblies (e.g., an HDA 124 ).
- a cooling fan 126 cools one or more storage nodes in the distributed computing system 100 .
- the HDA 124 of each storage node performs storage-related tasks such as read and write operations, and the processor 122 of each storage node is configured to perform storage and/or computing tasks for the distributed computing system 100 .
- the HDA 124 typically includes an actuator arm that pivots about an axis of rotation to position a transducer head, located on the distal end of the arm, over a data track on a media disc.
- the movement of the actuator arm may be controlled by a voice coil motor, and a spindle motor may be used to rotate the media disc below the actuator arm.
- rotational vibrations experienced by the HDA 124 can result in unwanted rotation of the actuator arm about the arm's axis of rotation (e.g., in the cross-track direction). When severe enough, this unwanted rotation can knock the transducer head far enough off of a desired data track that a positional correction is required. Such events can contribute to diminished read and/or write performance in the HDA 124 and the distributed computing system 100 .
- Each HDA 124 in the distributed computing system 100 communicates with at least one processor 122 .
- the processor 122 is able to detect a position of the transducer head of the HDA 124 at any given time based on read sensor signals sent from the transducer head or servo pattern information that is detected by the transducer head and passed to the processor 122 .
- the processor 122 may detect that the drive is not tracking properly and take steps to correct the tracking
- the processor 122 may determine that the transducer head has hit an off-track limit when vibrations cause the transducer head to stray off of a desired data track. In such cases, the processor 122 may instruct the drive to halt the current reading or writing operation for one or more rotations of the disc so that the transducer head can be repositioned.
- the processor 122 of each of the storage nodes collects information from the HDAs 124 of each storage node regarding the degree to which the HDAs 124 are impacted by vibration.
- the processor 122 of each storage node measures I/O degradation attributable to vibration occurring at the storage node.
- the processor 122 may record I/O degradation information in one or more log files.
- the processor 122 of a storage node may record computation times related to each task or measurements of one or more vibration sensors in the log file.
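The per-node logging described above might be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: the class and field names (`DegradationLog`, `record_task`) and the elapsed-over-expected degradation ratio are all assumptions; the patent only says the processor records task times and vibration-sensor measurements in a log file.

```python
import time

class DegradationLog:
    """Illustrative per-node log of vibration-related I/O degradation.

    All names and the degradation metric are assumptions for this sketch.
    """
    def __init__(self):
        self.entries = []

    def record_task(self, task_id, elapsed_s, expected_s, sensor_reading):
        # A ratio above 1.0 means the task ran slower than expected,
        # which the node attributes (in part) to vibration.
        self.entries.append({
            "task": task_id,
            "degradation": elapsed_s / expected_s,
            "vibration": sensor_reading,
            "timestamp": time.time(),
        })

    def recent_degradation(self, n=10):
        # Mean slowdown over the last n logged tasks.
        recent = self.entries[-n:]
        return sum(e["degradation"] for e in recent) / len(recent)

log = DegradationLog()
log.record_task("write-42", elapsed_s=1.8, expected_s=1.0, sensor_reading=0.7)
log.record_task("read-43", elapsed_s=1.1, expected_s=1.0, sensor_reading=0.2)
```

A chassis-level controller or host could then poll `recent_degradation()` rather than parsing raw log files over the busy primary data interface.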
- the processor 122 of each of the storage nodes is further configured to communicate performance degradation information to the host server 106 , other storage nodes in the system, and/or other system processing entities such as a chassis-level controller (e.g., the chassis-level controller 112 in chassis 108 ).
- performance degradation refers to I/O degradation attributable to system vibrations. Certain factors such as temperature, humidity, and altitude may make a storage node more susceptible to performance degradation.
- the storage nodes (e.g., storage nodes 102 , 104 ) illustrated in FIG. 1 are distributed across multiple chassis (e.g., a chassis 108 ) mounted on racks 128 , 130 . In some cases, one or more rack-mounted chassis may be kept in a cabinet.
- Each chassis 108 includes multiple storage nodes and a plurality of cooling fans (e.g., fan 126 ). In one implementation, the cooling fan 126 is positioned immediately behind a vertical stack of three storage nodes. In another implementation, one or more of the processors 122 of storage nodes in close physical proximity to the cooling fan 126 controls the cooling fan 126 . In yet another implementation, a processor 122 of the chassis-level controller 112 controls the cooling fan 126 .
- the storage nodes may be distributed in a variety of configurations employing any number of racks, chassis, or fans.
- the distributed computing system 100 includes storage nodes at two separate physical locations (for example, in different facilities).
- each chassis (e.g., the chassis 108 ) includes one or more temperature, humidity, or GPS sensors.
- the host server 106 resides on a single, rack-mounted computer server 120 .
- the host server 106 may be distributed across one or more of the processors 122 of the storage nodes or across other processors or other systems of processors.
- the host server 106 may be communicatively coupled to the processors of the storage nodes and/or processors of one or more of the chassis-level controllers.
- the host server 106 has the ability to initiate, receive, and/or respond to communications with one or more storage nodes or chassis-level controllers (e.g., the chassis-level controller 112 ) in the distributed computing system 100 .
- the host server 106 may distribute a computing workload among the storage nodes or query the processors of the storage nodes over a data interface to obtain storage node performance degradation information.
- the storage nodes and/or chassis-level controllers can initiate communications with the host server 106 .
- one or more storage nodes and/or chassis-level controllers may inform the host server 106 of a degraded system component in a storage node.
- the host server 106 may take a corrective action, such as refraining from assigning storage tasks to the storage node with the degraded component in the future.
- host server 106 communicates with the storage nodes in a chassis through the chassis-level controller of that chassis. For example, the host server 106 may query the chassis-level controller 112 , rather than the individual storage nodes, to request storage node performance degradation information. In another implementation, the host server 106 queries the chassis-level controller 112 to gain knowledge of actions taken by the chassis-level controller and/or the storage nodes. For example, the host server 106 may query the chassis-level controller 112 and learn that the chassis-level controller 112 has recently altered the speed of a cooling fan to try to reduce the performance degradation observed in the storage node 102 .
- Each processor 122 of each of the storage nodes in the distributed computing system 100 may be communicatively coupled to a chassis-level controller (e.g., the chassis-level controller 112 ), the host server 106 , and/or the processors of some or all of the other storage nodes in the system 100 .
- the storage nodes in a chassis may actively communicate with the chassis-level controller 112 , the host server 106 , and/or other storage nodes in the system to effectuate changes to the system that improve system performance.
- the processor 122 of the storage node 102 is communicatively coupled to the processors of each of the storage nodes located in the same chassis (i.e., the top-level chassis 108 on the racks 128 , 130 ).
- the storage node 102 can initiate communications with any number of the other storage nodes located in the same top-level chassis 108 .
- the processor 122 of the storage node 102 is communicatively coupled to the processors of each of the storage nodes in the entire distributed computing system 100 .
- the chassis-level controller of each chassis manages the electronics in the chassis (such as the one or more cooling fans 126 ) and has knowledge of the physical location of each of the storage nodes in the chassis.
- the host server 106 lacks such an understanding of the physical location of each of the storage nodes in the chassis 108 .
- a chassis-level controller may be uniquely suited to troubleshoot and diagnose sources of vibration within a chassis.
- a storage node 102 has an internal accelerometer that may determine directional aspects of vibrations detected in the storage node.
- the processor 122 of the storage node 102 may communicate such information (i.e., information related to the dimensional influence of vibrations) to request that another system component perform a corrective action for decreasing vibration-related performance degradation in the storage node 102 .
- the chassis-level controller of each chassis monitors performance degradation experienced at each of the storage nodes of the chassis, as well as other storage node conditions such as temperature, power supply, voltage, humidity, barometric pressure, etc.
- the chassis-level controller may utilize such information to determine one or more sources of performance degradation for the storage nodes.
- the chassis-level controller is communicatively coupled to the processor in each of the storage nodes in the chassis and also to the host server 106 .
- the chassis-level controller may serve as an intermediary for information transmitted between the host server 106 and the storage nodes.
- the host server 106 may request performance degradation information from the chassis-level controller 112 and the chassis-level controller 112 may then query each of the processors 122 of the storage nodes to obtain such information and relay it back to the host server 106 .
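The intermediary pattern just described (host queries the controller, the controller fans the query out to its nodes and relays the replies) can be sketched as below. `StorageNode.report` and `ChassisController.handle_host_query` are hypothetical names, assuming each node exposes a single degradation figure:

```python
class StorageNode:
    """Minimal stand-in for a storage node's processor."""
    def __init__(self, node_id, degradation):
        self.node_id = node_id
        self.degradation = degradation

    def report(self):
        return {"node": self.node_id, "degradation": self.degradation}

class ChassisController:
    """Relays a host query to every node in the chassis and aggregates
    the replies, so the host never has to poll nodes individually."""
    def __init__(self, nodes):
        self.nodes = nodes

    def handle_host_query(self):
        return [node.report() for node in self.nodes]

controller = ChassisController([
    StorageNode("node-102", 0.35),
    StorageNode("node-104", 0.05),
])
reply = controller.handle_host_query()
```

The same aggregation step is where the controller could also push unsolicited reports to the host on its own schedule.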
- the chassis-level controller 112 monitors the nodes and periodically reports performance degradation information back to the host server 106 without receiving a query for such information from the host server 106 .
- the chassis-level controller 112 may notify the host server 106 when a storage node is experiencing a high level of performance degradation.
- the host server 106 may react by taking an action to improve system performance, such as by redistributing certain computing tasks or by alerting a system administrator of a persistent problem.
- the chassis-level controller 112 is operable to receive, react, and/or reply to communications initiated by a processor 122 in one of the storage nodes.
- the processor 122 in the storage node 102 may alert the chassis-level controller 112 of a high level of performance degradation observed in the storage node 102 so that the chassis-level controller 112 can take action to try to reduce the performance degradation (e.g., such as by altering a fan speed in the chassis).
- the chassis-level controller 112 may convey an alert message to the host server 106 so that the host server 106 can take some action to try to mitigate the performance degradation.
- the storage nodes communicate directly with the host server 106 without involving the chassis-level controller 112 .
- FIG. 2 illustrates another implementation of a distributed computing system 200 with peer-to-peer vibration mitigation capability.
- the distributed computing system 200 has a central host server 206 that is communicatively coupled by way of a drive operation interface 214 to a plurality of storage nodes (e.g., storage nodes 202 and 204 ) in the distributed computing system 200 .
- Each storage node includes one or more processing units (not shown) communicatively coupled to one or more hard drive assemblies (not shown).
- a chassis-level controller 212 is also communicatively coupled to each of the storage nodes and to the host server 206 by way of the drive operation interface 214 .
- the drive operation interface 214 allows the storage nodes (e.g., the storage nodes 202 , 204 ), the chassis-level controller 212 , and/or the central host server 206 to initiate communications with one or more external components in the distributed computing system 200 .
- the drive operation interface 214 allows a storage node to initiate communications with another storage node, with the chassis-level controller 212 , and/or with the host server 206 .
- the drive operation interface 214 permits the storage nodes to initiate communications with other processing entities in the distributed computing system to communicate real-time performance problems.
- the drive operation interface 214 operates through the same physical connections that are used for a primary data interface of the distributed computing system 200 . That is, the electrical connections through which data is read from and written to the storage nodes may also serve as the conduit for communication from one storage node to another storage node, from one storage node to the host server 206 , from one storage node to the chassis-level controller 212 , etc.
- the drive operation interface 214 is thus a separate logical interface that utilizes the same physical interface as the primary data interface.
- any corrective actions taken by system processing entities (such as the storage node processors or the chassis-level controller 212 ) may be communicated back up to the host server 206 so that the host is aware of changes in the distributed computing system 200 .
- the drive operation interface 214 operates through a secondary physical interface that is separate from and in addition to the primary data interface of the distributed computing system 200 .
- data is read from and written to the storage nodes over a different physical interface than the drive operation interface 214 .
- the processors in the storage nodes may communicate with one another, with the host server 206 , and/or with the chassis-level controller 212 over a separate physical connection such as an I2C, SAS, SATA, USB, PCIe or Ethernet connection.
- the drive operation interface 214 may facilitate communications between storage nodes, between storage nodes and chassis-level controllers, between storage nodes and the host server 206 , and between chassis-level controllers and the host server 206 .
- Information communicated over the drive operation interface 214 may include performance-related data for each of the storage nodes in the distributed computing system 200 .
- the storage node 202 may transmit, over the drive operation interface 214 , a message that it is taking longer than expected to complete a storage or computing task.
- the storage node 202 may utilize the drive operation interface 214 to transmit a measurement observed by one or more vibration sensors in storage node 202 .
- the storage node 202 may communicate such information as it relates to observations over the past few days, weeks, months, etc.
- the storage nodes may evaluate and communicate likely reasons for observed vibration and/or performance degradation problems.
- the processor in the storage node 202 determines that it has seen higher than average performance degradation over the past several weeks and reports to the host server 206 , over the drive operation interface 214 , that a degraded hard disc or other system component is likely the problem.
- a storage node trouble-shoots performance-related problems by seeking out information regarding possible sources of vibration in the storage node's localized environment.
- the processor of the storage node 202 may communicate with the processors of physically adjacent storage nodes to find out what type of tasks are currently being performed on the physically adjacent storage nodes. If it can be determined that one of the physically adjacent nodes is performing a high I/O task, the storage node 202 may determine that its own performance-related problems are attributable to incident vibrations caused by the task in the physically adjacent node.
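As a sketch of that neighbor-querying step: the field names and the I/O-load threshold below are illustrative assumptions, not values from the patent.

```python
def suspect_neighbors(neighbor_reports, high_io_threshold=0.8):
    """Given task reports from physically adjacent nodes, return the ids
    of neighbors whose I/O load is high enough to plausibly explain the
    incident vibration. The 0.8 threshold is an assumption."""
    return [r["id"] for r in neighbor_reports
            if r["io_load"] >= high_io_threshold]

reports = [
    {"id": "node-201", "io_load": 0.9},   # heavy sequential writes next door
    {"id": "node-203", "io_load": 0.1},   # mostly idle
]
suspects = suspect_neighbors(reports)
```

An empty result would tell the node to look elsewhere, e.g. at cooling fans or its own degraded components.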
- the storage nodes can independently take real-time corrective actions to respond to information received over the drive operation interface 214 .
- the storage node 202 might detect that it is experiencing a high degree of performance degradation and send out a “distress” cry to other storage nodes in close physical proximity.
- One or more storage nodes that receive the distress cry may respond by adjusting their own behavior in order to decrease vibrations affecting the storage node 202 .
- the storage nodes that receive the distress cry may postpone one or more of their own storage or computing tasks to reduce vibrations incident on the storage node 202 long enough for the storage node 202 to complete a current task.
- the storage nodes are able to redistribute storage and/or computing tasks among themselves over the drive operation interface 214 .
- the storage node 202 may detect that it is having difficulty completing a write operation task and determine that the difficulty is most likely due to a hard disc component in need of repair.
- the storage node 202 may utilize the drive operation interface 214 to seek out a storage node to accept transfer of the write operation task. That is, the storage node 202 may utilize the drive operation interface 214 to identify a “free” storage node that is unoccupied or currently performing a low-priority task that can be postponed. Once a free storage node is identified, the storage node 202 can transfer the write operation task to the free storage node.
- the free storage node can then complete the write operation task while temporarily postponing any of its own low-priority tasks.
- the host server 206 may be notified of the transfer to ensure that the host server 206 maintains knowledge of where all data is located in the distributed computing system 200 .
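The free-node hand-off described above might look like the following sketch; `busy_priority`, the queue field, and the notification callback are assumptions made for illustration.

```python
def find_free_node(nodes):
    """Return the first node that is idle (None) or doing only postponable
    low-priority work. The 'busy_priority' field is an assumption."""
    for node in nodes:
        if node["busy_priority"] in (None, "low"):
            return node
    return None

def transfer_task(task, nodes, notify_host):
    """Hand a task to a free peer and notify the host, so the host keeps
    an accurate picture of where data lives in the system."""
    free = find_free_node(nodes)
    if free is None:
        return None   # no peer can take the work right now
    free.setdefault("queue", []).append(task)
    notify_host(task, free["id"])
    return free["id"]

peers = [
    {"id": "node-203", "busy_priority": "high"},
    {"id": "node-205", "busy_priority": "low"},
]
host_log = []
dest = transfer_task("write-op-7", peers, lambda t, n: host_log.append((t, n)))
```

Note that the host notification is part of the transfer itself, matching the requirement that the host maintain knowledge of where all data is located.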
- the chassis-level controller 212 uses information received over the drive operation interface 214 to diagnose and respond to one or more sources of vibration.
- the storage node 202 may send performance degradation information along with operational information, such as voltage or internal storage node temperatures to the chassis-level controller 212 .
- the chassis-level controller 212 can analyze such information to determine which corrective actions are likely to improve performance in the storage node 202 .
- the chassis-level controller 212 may detect that the storage node 202 is warmer than average because of a low voltage causing increased current.
- the chassis-level controller 212 may increase the speed of a nearby cooling fan.
- the chassis-level controller 212 may ask the host server 206 to avoid scheduling all jobs or certain types of jobs in the storage node 202 in the future.
- the chassis-level controller 212 is able to initiate communications with the storage nodes and/or respond to inquiries from the storage nodes over the drive operation interface 214 in order to facilitate adaptive vibration mitigation.
- the storage node 202 may inform the chassis-level controller 212 of the most likely sources of observed vibrations in the storage node 202 and the chassis-level controller 212 may troubleshoot by making changes to electronic generators of mechanical vibration to try to improve performance in the storage node 202 .
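The warm-node diagnosis walked through in the last few bullets (node hotter than its chassis average, low supply voltage raising current draw, fan speed-up, and a request to the host to lighten load) might be sketched like this. Thresholds, units, and action names are all assumptions:

```python
def plan_corrective_actions(node_temp_c, chassis_avg_c, voltage_v,
                            nominal_v=12.0, fan_step_rpm=200):
    """Return a list of (action, detail) pairs for one warm storage node.

    A supply voltage below nominal raises current draw and therefore heat,
    so the controller both cools harder and asks the host to lighten load.
    All names and numbers here are illustrative assumptions.
    """
    actions = []
    if node_temp_c > chassis_avg_c and voltage_v < nominal_v:
        actions.append(("increase_fan_rpm", fan_step_rpm))
        actions.append(("ask_host", "avoid heavy jobs on this node"))
    return actions

plan = plan_corrective_actions(node_temp_c=48.0, chassis_avg_c=41.0,
                               voltage_v=11.4)
```

A node at or below the chassis average with nominal voltage yields an empty plan, i.e. no intervention.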
- FIG. 3 illustrates example operations of peer-to-peer vibration mitigation in a distributed computing system according to one implementation.
- a detection operation 305 detects performance degradation occurring in a “distressed” storage node.
- a processor of the distressed storage node may detect that a hard drive assembly in the node is taking longer than expected to perform a task or that one or more vibration sensors located in the distressed storage node have recently measured a high degree of vibration.
- a chassis-level controller communicatively coupled to the storage nodes in the distributed computing system might “check” on the distressed node and discover the high degree of performance degradation occurring in the distressed node.
- a communication initiation operation 310 initiates a communication with an external system component over a drive operation interface, the communication requesting a corrective action likely to decrease the performance degradation observed at the distressed node.
- the processor of the distressed storage node initiates the communication operation 310 by sending out a distress cry over the drive operation interface to alert other storage nodes, a system host, and/or the chassis-level controller of the performance degradation.
- the processor of the distressed storage node communicates to a chassis-level controller a degree of vibration detected in the storage node, and the chassis-level controller uses its knowledge of the location of other storage nodes within the distributed computing system 300 to identify sources of vibration.
- the chassis-level controller may communicate a corrective instruction to one or more system components for decreasing the vibration-related performance degradation in the storage node.
- the communication initiation operation 310 is performed by a processor of the chassis-level controller and is a request for additional system information that may assist in diagnosing the sources of the performance degradation in the distressed node.
- the chassis-level controller may communicate with the processors of storage nodes physically adjacent to the distressed storage node and request specifics of the types of tasks currently being performed on those storage nodes.
- a receiving operation 315 receives and processes the initial communication sent by the communication initiation operation 310 .
- the processor performing the receiving operation 315 may respond to the communication.
- a storage node physically adjacent to the distressed node may receive a distress cry from the distressed storage node and respond by providing details of the task currently being performed on the physically adjacent storage node.
- the chassis-level controller initially performs the receiving operation 315 and relays the distress cry of the distressed storage node up to a host server.
- the chassis-level controller performs the receiving operation 315 and responds by altering a state of one or more components in the chassis in order to improve performance of the distributed computing system.
- one or more processing entities in the distributed computing system may execute, via an execution operation 320 , a corrective action in response to the communication to decrease the performance degradation occurring at the distressed node.
- a storage node physically adjacent to the distressed node may receive a distress cry from the distressed node and respond by temporarily halting a high I/O task to decrease vibrations incident on the distressed node.
- the chassis-level controller receives a distress cry from the distressed storage node and responds by adjusting a system component, such as by altering the speed of a fan to reduce vibrations in the distressed storage node.
- the chassis-level controller utilizes knowledge of the physical location of each of the drives in a chassis to identify likely sources of vibration in the distressed node and to determine what corrective action is necessary.
- the chassis-level controller relays such a determination to the host server so the host server can take an appropriate corrective action via the execution operation 320 .
- a determination operation 325 determines whether or not the corrective action decreased the performance degradation observed at the distressed node. For example, the processor of the distressed node may detect whether vibrations in the storage node have decreased. If the performance degradation has not decreased, additional communications may be initiated and operations 310 - 325 may be repeated. Thus, additional corrective actions may be executed in an attempt to decrease the performance degradation in the distressed storage node.
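The detect/communicate/correct/verify cycle of operations 305 - 325 can be sketched as a retry loop. `detect` and the corrective-action callables below are placeholders for whatever the distressed node and its peers actually do; the round limit is an assumption:

```python
def mitigate_distress(detect, corrective_actions, max_rounds=5):
    """Sketch of FIG. 3: while degradation is still detected (305/325),
    request and execute the next corrective action (310-320), giving up
    after max_rounds attempts."""
    for round_no, act in enumerate(corrective_actions, start=1):
        if round_no > max_rounds or not detect():
            break
        act()
    return not detect()   # True if the degradation cleared

# Toy environment in which degradation clears after two corrective actions.
state = {"degradation": 2}
detect = lambda: state["degradation"] > 0
lessen = lambda: state.update(degradation=state["degradation"] - 1)
resolved = mitigate_distress(detect, [lessen, lessen, lessen])
```

Only two of the three available actions are executed, because the re-check (operation 325 ) finds the degradation already resolved.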
- FIG. 4 discloses a block diagram of a computer system 400 suitable for implementing one or more aspects of a peer-to-peer vibration mitigation system.
- the computer system 400 is capable of executing a computer program product embodied in a tangible computer-readable storage medium to execute a computer process. Data and program files may be input to the computer system 400 , which reads the files and executes the programs therein using one or more processors.
- Some of the elements of a computer system 400 are shown in FIG. 4 wherein a processor 402 is shown having an input/output (I/O) section 404 , a Central Processing Unit (CPU) 406 , and a memory section 408 .
- There may be one or more processors 402 , such that the processor 402 of the computing system 400 comprises a single central-processing unit 406 or a plurality of processing units.
- the processors may be single core or multi-core processors.
- the computing system 400 may be a conventional computer, a distributed computer, or any other type of computer.
- the described technology is optionally implemented in software loaded in memory 408 , a disc storage unit 412 , and/or communicated via a wired or wireless network link 414 on a carrier signal (e.g., Ethernet, 3G wireless, 4G wireless, LTE (Long Term Evolution)) thereby transforming the computing system 400 in FIG. 4 to a special purpose machine for implementing the described operations.
- The I/O section 404 may be connected to one or more user-interface devices (e.g., a keyboard, a touch-screen display unit 418, etc.) or a disc storage unit 412.
- Computer program products containing mechanisms to effectuate the systems and methods in accordance with the described technology may reside in the memory section 408 or on the storage unit 412 of such a system 400.
- A communication interface 424 is capable of connecting the computer system 400 to a network via the network link 414, through which the computer system can receive instructions and data embodied in a carrier wave.
- When used in a local area networking (LAN) environment, the computing system 400 is connected (by wired connection or wirelessly) to a local network through the communication interface 424, which is one type of communications device.
- When used in a wide-area-networking (WAN) environment, the computing system 400 typically includes a modem, a network adapter, or any other type of communications device for establishing communications over the wide area network.
- Program modules depicted relative to the computing system 400 or portions thereof may be stored in a remote memory storage device. It is appreciated that the network connections shown are examples of communications devices, and other means of establishing a communications link between the computers may be used.
- The computer system 400 is used to implement a host server having a processor 402 communicatively coupled to a plurality of storage nodes (not shown) and/or one or more chassis-level controllers (not shown).
- The computer system 400 is used to implement a storage node having a processor 402 that is communicatively coupled to processors of other storage nodes, one or more chassis-level controllers, or the host server.
- The computer system 400 is configured to communicate with system storage nodes, a host computer, or a drive-level chassis by way of the communication interface 424, which may be an Ethernet port, USB connection, or other physical connection such as an I2C, SAS, SATA, or PCIe connection.
- Peer to peer vibration mitigation operations and techniques may be embodied by instructions stored in memory 408 and/or the storage unit 412 and executed by the processor 402 .
- Local computing systems, remote data sources and/or services, and other associated logic represent firmware, hardware, and/or software that may also be configured to perform such operations.
- Any one of the host computer, a chassis-level controller, or a distributed computing storage node may be implemented using a general purpose computer and specialized software (such as a server executing service software), a special purpose computing system and specialized software (such as a mobile device or network appliance executing service software), or other computing configurations.
- Program data, such as task distribution information, storage node degradation information, and other data, may be stored in the memory 408 and/or the storage unit 412 and used by the processor 402.
- The implementations described herein are implemented as logical steps in one or more computer systems.
- The logical operations of the present invention are implemented (1) as a sequence of processor-implemented steps executing in one or more computer systems and (2) as interconnected machine or circuit modules within one or more computer systems.
- The implementation is a matter of choice, dependent on the performance requirements of the computer system implementing the invention. Accordingly, the logical operations making up the implementations of the invention described herein are referred to variously as operations, steps, objects, or modules.
- Logical operations may be performed in any order, adding and omitting as desired, unless explicitly claimed otherwise or a specific order is inherently necessitated by the claim language.
Abstract
Description
- The present application is related to U.S. patent application Ser. No. ______, entitled “Adaptive Vibration Mitigation” and filed concurrently herewith, which is specifically incorporated by reference herein for all that it discloses and teaches.
- Implementations described and claimed herein provide for a physical or logical drive operation interface in a distributed computing and storage environment over which communications may be initiated by storage nodes and/or system-level controllers to effectuate real-time corrective actions that mitigate system vibrations.
- This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Other features, details, utilities, and advantages of the claimed subject matter will be apparent from the following more particular written Detailed Description of various implementations as further illustrated in the accompanying drawings and defined in the appended claims.
- FIG. 1 illustrates one implementation of a distributed computing system with peer-to-peer vibration mitigation capability.
- FIG. 2 illustrates another implementation of a distributed computing system with peer-to-peer vibration mitigation capability.
- FIG. 3 illustrates example operations for peer-to-peer vibration mitigation according to one implementation.
- FIG. 4 discloses a block diagram of a computer system suitable for implementing aspects of at least one implementation of a peer-to-peer vibration mitigation system.
- Vibration can be a cause of hard disc drive performance problems, particularly in systems containing multiple disc drives in the same enclosure. Vibrations can be caused by forces including, without limitation, a drive's own actuator moment, the activity of other drives in a system enclosure, other sources of vibration such as cooling fans, etc.
- In a distributed computing system, a central host server may query or monitor one or more storage node logs to discover information relating to the amount of vibration experienced at each of the storage nodes. However, vibrational problems may be discovered after the fact or go unnoticed entirely if the central host server does not aggressively monitor or query the storage nodes over the primary data interface. Because primary data interfaces are typically very busy, aggressive monitoring of the storage nodes is not always feasible.
- Another problem with existing distributed computing systems is that primary data interfaces are sometimes inaccessible to certain parts of the system that are able to manage generators of mechanical energy that create system vibrations. For example, storage nodes may be located in a chassis having chassis management electronics. Unlike a remote system host, the chassis management electronics typically have knowledge of the physical locations of the individual storage nodes. However, the chassis management electronics typically lack the ability to communicate with the storage nodes over the primary data interface.
- To address these and potentially other problems, implementations of the systems described herein provide for a secondary communication interface over which the chassis management electronics and/or system storage nodes can initiate communications to effectuate performance-increasing system changes.
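As a concrete illustration of the kind of traffic such a secondary communication interface might carry, the sketch below frames a node-initiated message as JSON. The framing, field names, and addresses are assumptions made for illustration; the document does not define a wire format:

```python
import json

def encode_message(src, dst, kind, payload):
    """Encode a communication (e.g., a performance-degradation alert) for
    transmission over a secondary link such as I2C or Ethernet."""
    return json.dumps({"src": src, "dst": dst, "kind": kind,
                       "payload": payload}).encode("utf-8")

def decode_message(raw):
    """Recover the message dictionary on the receiving side."""
    return json.loads(raw.decode("utf-8"))
```

A storage node could then alert its chassis-level controller with, e.g., `encode_message("node-102", "chassis-112", "degradation_alert", {"vibration_g": 0.9})`.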
- FIG. 1 illustrates one implementation of a distributed computing system 100 with peer-to-peer vibration mitigation capability. The distributed computing system 100 has a central host server 106 that is communicatively coupled to a plurality of storage nodes (e.g., storage nodes 102, 104) in the distributed computing system 100. Each storage node includes one or more processing units (e.g., a processor 122) attached to one or more hard drive assemblies (e.g., an HDA 124). Typically, a cooling fan 126 cools one or more storage nodes in the distributed computing system 100. The HDA 124 of each storage node performs storage-related tasks such as read and write operations, and the processor 122 of each storage node is configured to perform storage and/or computing tasks for the distributed computing system 100.
- The HDA 124 typically includes an actuator arm that pivots about an axis of rotation to position a transducer head, located on the distal end of the arm, over a data track on a media disc. The movement of the actuator arm may be controlled by a voice coil motor, and a spindle motor may be used to rotate the media disc below the actuator arm. In operation, rotational vibrations experienced by the HDA 124 can result in unwanted rotation of the actuator arm about the arm's axis of rotation (e.g., in the cross-track direction). When severe enough, this unwanted rotation can knock the transducer head far enough off of a desired data track that a positional correction is required. Such events can contribute to diminished read and/or write performance in the HDA 124 and the distributed computing system 100.
- Each HDA 124 in the distributed computing system 100 communicates with at least one processor 122. The processor 122 is able to detect a position of the transducer head of the HDA 124 at any given time based on read sensor signals sent from the transducer head or servo pattern information that is detected by the transducer head and passed to the processor 122. Thus, during a reading or writing operation, the processor 122 may detect that the drive is not tracking properly and take steps to correct the tracking. For example, the processor 122 may determine that the transducer head has hit an off-track limit when vibrations cause the transducer head to stray off of a desired data track. In such cases, the processor 122 may instruct the drive to halt the current reading or writing operation for one or more rotations of the disc so that the transducer head can be repositioned.
- The processor 122 of each of the storage nodes collects information from the HDAs 124 of each storage node regarding the degree to which the HDAs 124 are impacted by vibration. In one implementation, the processor 122 of each storage node measures I/O degradation attributable to vibration occurring at the storage node. In the same or an alternate implementation, the processor 122 may record I/O degradation information in one or more log files. For example, the processor 122 of a storage node may record computation times related to each task or measurements of one or more vibration sensors in the log file.
- The processor 122 of each of the storage nodes is further configured to communicate performance degradation information to the host server 106, other storage nodes in the system, and/or other system processing entities such as a chassis-level controller (e.g., the chassis-level controller 112 in chassis 108). As used herein, the term "performance degradation" refers to I/O degradation attributable to system vibrations. Certain factors such as temperature, humidity, and altitude may make a storage node more susceptible to performance degradation.
- The storage nodes (e.g., storage nodes 102, 104) illustrated in FIG. 1 are distributed across multiple chassis (e.g., a chassis 108) mounted on racks 128, 130. Each chassis 108 includes multiple storage nodes and a plurality of cooling fans (e.g., fan 126). In one implementation, the cooling fan 126 is positioned immediately behind a vertical stack of three storage nodes. In another implementation, one or more of the processors 122 of storage nodes in close physical proximity to the cooling fan 126 controls the cooling fan 126. In yet another implementation, a processor 122 of the chassis-level controller 112 controls the cooling fan 126.
- In various implementations, the storage nodes may be distributed in a variety of configurations employing any number of racks, chassis, or fans. In at least one implementation, the distributed computing system 100 includes storage nodes at two separate physical locations (for example, in different facilities). In another implementation, each chassis (e.g., the chassis 108) includes one or more temperature, humidity, or GPS sensors.
- In the example illustrated by FIG. 1, the host server 106 resides on a single, rack-mounted computer server 120. However, in other implementations, the host server 106 may be distributed across one or more of the processors 122 of the storage nodes or across other processors or other systems of processors. The host server 106 may be communicatively coupled to the processors of the storage nodes and/or processors of one or more of the chassis-level controllers.
- In one implementation, the host server 106 has the ability to initiate, receive, and/or respond to communications with one or more storage nodes or chassis-level controllers (e.g., the chassis-level controller 112) in the distributed computing system 100. For example, the host server 106 may distribute a computing workload among the storage nodes or query the processors of the storage nodes over a data interface to obtain storage node performance degradation information.
- In another implementation, the storage nodes and/or chassis-level controllers can initiate communications with the host server 106. For example, one or more storage nodes and/or chassis-level controllers may inform the host server 106 of a degraded system component in a storage node. Here, the host server 106 may take a corrective action, such as refraining from assigning storage tasks to the storage node with the degraded component in the future.
- In another implementation, the host server 106 communicates with the storage nodes in a chassis through the chassis-level controller of that chassis. For example, the host server 106 may query the chassis-level controller 112, rather than the individual storage nodes, to request storage node performance degradation information. In another implementation, the host server 106 queries the chassis-level controller 112 to gain knowledge of actions taken by the chassis-level controller and/or the storage nodes. For example, the host server 106 may query the chassis-level controller 112 and learn that the chassis-level controller 112 has recently altered the speed of a cooling fan to try to reduce the performance degradation observed in the storage node 102.
- Each processor 122 of each of the storage nodes in the distributed computing system 100 may be communicatively coupled to a chassis-level controller (e.g., the chassis-level controller 112), the host server 106, and/or the processors of some or all of the other storage nodes in the system 100. Thus, the storage nodes in a chassis may actively communicate with the chassis-level controller 112, the host server 106, and/or other storage nodes in the system to effectuate changes to the system that improve system performance.
- In one example implementation, the processor 122 of the storage node 102 is communicatively coupled to the processors of each of the storage nodes located in the same chassis (i.e., the top-level chassis 108 on the racks 128, 130). Here, the storage node 102 can initiate communications with any number of the other storage nodes located in the same top-level chassis 108. In yet another implementation, the processor 122 of the storage node 102 is communicatively coupled to the processors of each of the storage nodes in the entire distributed computing system 100.
- The chassis-level controller of each chassis manages the electronics in the chassis (such as the one or more cooling fans 126) and has knowledge of the physical location of each of the storage nodes in the chassis. In at least one implementation, the host server 106 lacks such an understanding of the physical location of each of the storage nodes in the chassis 108. Thus, a chassis-level controller may be uniquely suited to troubleshoot and diagnose sources of vibration within a chassis.
- In one implementation, a storage node 102 has an internal accelerometer that may determine directional aspects of vibrations detected in the storage node. Here, the processor 122 of the storage node 102 may communicate such information (i.e., information related to the dimensional influence of vibrations) to request that another system component perform a corrective action for decreasing vibration-related performance degradation in the storage node 102.
- In one implementation, the chassis-level controller of each chassis monitors performance degradation experienced at each of the storage nodes of the chassis, as well as other storage node conditions such as temperature, power supply, voltage, humidity, barometric pressure, etc. The chassis-level controller may utilize such information to determine one or more sources of performance degradation for the storage nodes.
- In one implementation, the chassis-level controller is communicatively coupled to the processor in each of the storage nodes in the chassis and also to the host server 106. Thus, the chassis-level controller may serve as an intermediary for information transmitted between the host server 106 and the storage nodes. For example, the host server 106 may request performance degradation information from the chassis-level controller 112, and the chassis-level controller 112 may then query each of the processors 122 of the storage nodes to obtain such information and relay it back to the host server 106.
- In yet another implementation, the chassis-level controller 112 monitors the nodes and periodically reports performance degradation information back to the host server 106 without receiving a query for such information from the host server 106. For example, the chassis-level controller 112 may notify the host server 106 when a storage node is experiencing a high level of performance degradation. The host server 106 may react by taking an action to improve system performance, such as by redistributing certain computing tasks or by alerting a system administrator of a persistent problem.
- In the same or an alternate implementation, the chassis-level controller 112 is operable to receive, react, and/or reply to communications initiated by a processor 122 in one of the storage nodes. For example, the processor 122 in the storage node 102 may alert the chassis-level controller 112 of a high level of performance degradation observed in the storage node 102 so that the chassis-level controller 112 can take action to try to reduce the performance degradation (e.g., by altering a fan speed in the chassis). Alternatively, the chassis-level controller 112 may convey an alert message to the host server 106 so that the host server 106 can take some action to try to mitigate the performance degradation. In another implementation, the storage nodes communicate directly with the host server 106 without involving the chassis-level controller 112.
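One plausible version of that chassis-level decision, sketched in Python (the severity scale, RPM step, and callback signatures are assumptions for illustration, not details from the document):

```python
def handle_degradation_alert(alert, set_fan_speed, notify_host):
    """Chassis-level handling of a node's degradation alert: damp a nearby
    fan for moderate severity, escalate to the host when severity is high."""
    if alert["severity"] >= 0.8:
        notify_host(alert)                     # let the host redistribute work
        return "escalated"
    set_fan_speed(alert["nearest_fan"], delta_rpm=-200)  # slow the nearby fan
    return "fan_adjusted"
```

In practice `set_fan_speed` and `notify_host` would wrap the chassis management electronics and the host-facing interface, respectively.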
- FIG. 2 illustrates another implementation of a distributed computing system 200 with peer-to-peer vibration mitigation capability. The distributed computing system 200 has a central host server 206 that is communicatively coupled by way of a drive operation interface 214 to a plurality of storage nodes (e.g., storage nodes 202 and 204) in the distributed computing system 200. Each storage node includes one or more processing units (not shown) communicatively coupled to one or more hard drive assemblies (not shown). A chassis-level controller 212 is also communicatively coupled to each of the storage nodes and to the host server 206 by way of the drive operation interface 214.
- The drive operation interface 214 allows the storage nodes (e.g., the storage nodes 202, 204), the chassis-level controller 212, and/or the central host server 206 to initiate communications with one or more external components in the distributed computing system 200. For example, the drive operation interface 214 allows a storage node to initiate communications with another storage node, with the chassis-level controller 212, and/or with the host server 206. In contrast with traditional distributed computing systems where the host server 206 monitors system performance by periodically making queries to the processors of the storage nodes, the drive operation interface 214 permits the storage nodes to initiate communications with other processing entities in the distributed computing system to communicate real-time performance problems.
- In one implementation, the drive operation interface 214 operates through the same physical connections that are used for a primary data interface of the distributed computing system 200. That is, the electrical connections through which data is read from and written to the storage nodes may also serve as the conduit for communication from one storage node to another storage node, from one storage node to the host server 206, from one storage node to the chassis-level controller 212, etc. In such an implementation, the drive operation interface 214 is thus a separate logical interface that utilizes the same physical interface as the primary data interface. Here, any corrective actions taken by system processing entities (such as the storage node processors or the chassis-level controller 212) may be communicated back up to the host server 206 so that the host is aware of changes in the distributed computing system 200.
- In another implementation, the drive operation interface 214 operates through a secondary physical interface that is separate from and in addition to the primary data interface of the distributed computing system 200. Here, data is read from and written to the storage nodes over a different physical interface than the drive operation interface 214. For example, the processors in the storage nodes may communicate with one another, with the host server 206, and/or with the chassis-level controller 212 over a separate physical connection such as an I2C, SAS, SATA, USB, PCIe, or Ethernet connection.
- The drive operation interface 214 may facilitate communications between storage nodes, between storage nodes and chassis-level controllers, between storage nodes and the host server 206, and between chassis-level controllers and the host server 206.
- Information communicated over the drive operation interface 214 may include performance-related data for each of the storage nodes in the distributed computing system 200. For example, the storage node 202 may transmit, over the drive operation interface 214, a message that it is taking longer than expected to complete a storage or computing task. Alternatively, the storage node 202 may utilize the drive operation interface 214 to transmit a measurement observed by one or more vibration sensors in the storage node 202. The storage node 202 may communicate such information as it relates to observations over the past few days, weeks, months, etc.
- In addition to communicating the amount of vibration and/or performance degradation experienced at a storage node, the storage nodes may evaluate and communicate likely reasons for observed vibration and/or performance degradation problems. In one example implementation, the processor in the storage node 202 determines that it has seen higher than average performance degradation over the past several weeks and reports to the host 206, over the drive operation interface 214, that a degraded hard drive disk or other system component is likely the problem.
- In another implementation, a storage node troubleshoots performance-related problems by seeking out information regarding possible sources of vibration in the storage node's localized environment. For example, the processor of the storage node 202 may communicate with the processors of physically adjacent storage nodes to find out what type of tasks are currently being performed on the physically adjacent storage nodes. If it can be determined that one of the physically adjacent nodes is performing a high I/O task, the storage node 202 may determine that its own performance-related problems are attributable to incident vibrations caused by the task in the physically adjacent node.
- In at least one implementation, the storage nodes can independently take real-time corrective actions to respond to information received over the drive operation interface 214. For example, the storage node 202 might detect that it is experiencing a high degree of performance degradation and send out a "distress" cry to other storage nodes in close physical proximity. One or more storage nodes that receive the distress cry may respond by adjusting their own behavior in order to decrease vibrations affecting the storage node 202. Specifically, the storage nodes that receive the distress cry may postpone one or more of their own storage or computing tasks to reduce vibrations incident on the storage node 202 long enough for the storage node 202 to complete a current task.
- In yet another implementation, the storage nodes are able to redistribute storage and/or computing tasks among themselves over the drive operation interface 214. For example, the storage node 202 may detect that it is having difficulty completing a write operation task and determine that the difficulty is most likely due to a hard drive disk component in need of repair. Here, the storage node 202 may utilize the drive operation interface 214 to seek out a storage node to accept transfer of the write operation task. That is, the storage node 202 may utilize the drive operation interface 214 to identify a "free" storage node that is unoccupied or currently performing a low-priority task that can be postponed. Once a free storage node is identified, the storage node 202 can transfer the write operation task to the free storage node. The free storage node can then complete the write operation task while temporarily postponing any of its own low-priority tasks. In such cases where a task is transferred from one storage node to another, the host server 206 may be notified of the transfer to ensure that the host server 206 maintains knowledge of where all data is located in the distributed computing system 200.
- In some implementations, the chassis-level controller 212 uses information received over the drive operation interface 214 to diagnose and respond to one or more sources of vibration. For example, the storage node 202 may send performance degradation information along with operational information, such as voltage or internal storage node temperatures, to the chassis-level controller 212. Here, the chassis-level controller 212 can analyze such information to determine which corrective actions are likely to improve performance in the storage node 202. For instance, the chassis-level controller 212 may detect that the storage node 202 is warmer than average because of a low voltage causing increased current. In response, the chassis-level controller 212 may increase the speed of a nearby cooling fan. Alternatively, the chassis-level controller 212 may ask the host server 206 to avoid scheduling all jobs or certain types of jobs in the storage node 202 in the future.
- In another implementation, the chassis-level controller 212 is able to initiate communications with the storage nodes and/or respond to inquiries from the storage nodes over the drive operation interface 214 in order to facilitate adaptive vibration mitigation. For example, the storage node 202 may inform the chassis-level controller 212 of the most likely sources of observed vibrations in the storage node 202, and the chassis-level controller 212 may troubleshoot by making changes to electronic generators of mechanical vibration to try to improve performance in the storage node 202.
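The neighbor-side reaction to a distress cry described in this section might look like the following sketch; the task records, the I/O-rate field, and the postponement cutoff are illustrative assumptions rather than details from the document:

```python
def respond_to_distress(local_tasks, postpone, io_cutoff=0.5):
    """On receiving a distress cry, postpone local high-I/O tasks so the
    distressed node can finish its current operation; return their ids."""
    postponed = []
    for task in local_tasks:
        if task["io_rate"] >= io_cutoff:   # high-I/O work shakes the chassis most
            postpone(task)                 # defer it until the neighbor recovers
            postponed.append(task["id"])
    return postponed
```

The returned ids could be reported back over the drive operation interface so the distressed node (or the host) knows which tasks were deferred on its behalf.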
FIG. 3 illustrates example operations of peer-to-peer vibration mitigation in a distributing computing system according to one implementation. Adetection operation 305 detects performance degradation occurring in a “distressed” storage node. For example, a processor of the distressed storage node may detect that a hard drive assembly in the node is taking longer than expected to perform a task or that one or more vibration sensors located in the distressed storage node have recently measured a high degree of vibration. Alternatively, a chassis-level controller communicatively coupled to the storage nodes in the distributed computing system might “check” on the distressed node and discover the high degree of performance degradation occurring in the distressed node. - After the
detection operation 305, acommunication initiation operation 310 initiates a communication with an external system component over a drive operation interface, the communication requesting a corrective action likely to decrease the performance degradation observed at the distressed node. In one implementation, the processor of the distressed storage node initiates thecommunication operation 310 by sending out a distress cry over the drive operation interface to alert other storage nodes, a system host, and/or the chassis-level controller of the performance degradation. In another implementation, the processor of the distressed storage node communicates to a chassis-level controller a degree of vibration detected in the storage node, and the chassis-level controller uses its knowledge of the location of other storage nodes within the distributedcomputing system 300 to identify sources of vibration. Here, the chassis-level controller may communicate a corrective instruction to one or more system components for decreasing the vibration-related performance degradation in the storage node. - In another implementation, the
communication initiation operation 310 is performed by a processor of the chassis-level controller and is a request for additional system information that may assist in diagnosing the sources of the performance degradation in the distressed node. For example, the chassis-level controller may communicate with the processors of storage nodes physically adjacent to the distressed storage node and request specifics of the types of tasks currently being performed on those storage nodes. - A receiving
operation 315 receives and processes the initial communication sent by thecommunication initiation operation 310. Optionally, the processor performing the receiving operation 315 (such as a processor of a storage node, the chassis-level control, or of a central host server) may respond to the communication. For example, a storage node physically adjacent to the distressed node may receive a distress cry from the distressed storage node and respond by providing details of the task currently being performed on the physically adjacent storage node. In yet another implementation, the chassis-level controller initially performs the receivingoperation 315 and relays the distress cry of the distressed storage node up to a host server. In another implementation, the chassis-level controller performs the receivingoperation 315 and responds by altering a state of one or more components in the chassis in order to improve performance of the distributed computing system. - After the initial communication is received, one or more processing entities in the distributing computing system may execute, via an
execution operation 320, a corrective action in response to the communication to decrease the performance degradation occurring at the distressed node. For example, a storage node physically adjacent to the distressed node may receive a distress cry from the distressed node and respond by temporarily halting a high I/O task to decrease vibrations incident on the distressed node. - In another implementation, the chassis-level controller receives a distress cry from the distressed storage node and responds by affecting a system component, such as by altering the speed of a fan to reduce vibrations in the distressed storage node. In yet another implementation, the chassis-level controller utilizes knowledge of the physical location of each of the drives in a chassis to identify likely sources of vibration in the distressed node and to determine what corrective action is necessary. In one implementation, the chassis-level controller relays such a determination to the host server so the host server can take an appropriate corrective action via the
execution operation 320. - After a corrective action has been executed, a
determination operation 325 determines whether the corrective action decreased the performance degradation observed at the distressed node. For example, the processor of the distressed node may detect whether vibrations in the storage node have decreased. If the performance degradation has not decreased, additional communications may be initiated and operations 310-325 may be repeated. Thus, additional corrective actions may be executed in an attempt to decrease the performance degradation in the distressed storage node. -
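As a concrete illustration of operations 310 through 325 described above, the following minimal sketch models a distressed node broadcasting a distress cry to its physically adjacent peers, the peers pausing a high-I/O task in response, and the loop repeating until the degradation decreases. All class and function names, and the toy vibration model, are hypothetical and are not taken from the claims:

```python
class StorageNode:
    """Hypothetical model of one storage node in a shared chassis."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.peers = []       # physically adjacent storage nodes
        self.high_io = True   # whether this node is running a high-I/O task

    def vibration_level(self):
        # Toy model: each high-I/O neighbor contributes some vibration.
        return sum(0.4 for peer in self.peers if peer.high_io)

    def broadcast_distress(self):
        """Operation 310: notify adjacent peers of performance degradation."""
        for peer in self.peers:
            peer.receive_distress(self.node_id)

    def receive_distress(self, source_id):
        """Operations 315/320: corrective action - temporarily halt high I/O."""
        self.high_io = False


def mitigate(node, threshold=0.5, max_rounds=3):
    """Operations 310-325: repeat corrective actions until vibration drops."""
    for _ in range(max_rounds):
        if node.vibration_level() < threshold:   # operation 325
            return True                          # degradation decreased
        node.broadcast_distress()                # operations 310-320
    return node.vibration_level() < threshold    # False would escalate to host
```

For example, a node with two high-I/O neighbors sees their combined vibration exceed the threshold; after one round of distress broadcasts, both neighbors throttle and the next check passes.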
FIG. 4 discloses a block diagram of a computer system 400 suitable for implementing one or more aspects of a peer-to-peer vibration mitigation system. The computer system 400 is capable of executing a computer program product embodied in a tangible computer-readable storage medium to execute a computer process. Data and program files may be input to the computer system 400, which reads the files and executes the programs therein using one or more processors. Some of the elements of a computer system 400 are shown in FIG. 4, wherein a processor 402 is shown having an input/output (I/O) section 404, a Central Processing Unit (CPU) 406, and a memory section 408. There may be one or more processors 402, such that the processor 402 of the computing system 400 comprises a single central processing unit 406 or a plurality of processing units. The processors may be single-core or multi-core processors. The computing system 400 may be a conventional computer, a distributed computer, or any other type of computer. The described technology is optionally implemented in software loaded in memory 408, a disc storage unit 412, and/or communicated via a wired or wireless network link 414 on a carrier signal (e.g., Ethernet, 3G wireless, 4G wireless, LTE (Long Term Evolution)), thereby transforming the computing system 400 in FIG. 4 into a special-purpose machine for implementing the described operations. - The I/O section 404 may be connected to one or more user-interface devices (e.g., a keyboard, a touch-screen display unit 418, etc.) or a disc storage unit 412. Computer program products containing mechanisms to effectuate the systems and methods in accordance with the described technology may reside in the memory section 408 or on the storage unit 412 of such a system 400. - A
communication interface 424 is capable of connecting the computer system 400 to a network via the network link 414, through which the computer system can receive instructions and data embodied in a carrier wave. When used in a local area network (LAN) environment, the computing system 400 is connected (by wired connection or wirelessly) to a local network through the communication interface 424, which is one type of communications device. When used in a wide area network (WAN) environment, the computing system 400 typically includes a modem, a network adapter, or any other type of communications device for establishing communications over the wide area network. In a networked environment, program modules depicted relative to the computing system 400, or portions thereof, may be stored in a remote memory storage device. It is appreciated that the network connections shown are exemplary and that other means of establishing a communications link between the computers may be used. - In one implementation, the
computer system 400 is used to implement a host server having a processor 402 communicatively coupled to a plurality of storage nodes (not shown) and/or one or more chassis-level controllers (not shown). In another implementation, the computer system 400 is used to implement a storage node having a processor 402 that is communicatively coupled to processors of other storage nodes, one or more chassis-level controllers, or the host server. In yet another example implementation, the computer system 400 is configured to communicate with system storage nodes, a host computer, or a chassis-level controller by way of the communication interface 424, which may be an Ethernet port, USB connection, or another physical connection such as an I2C, SAS, SATA, or PCIe connection. - Peer to peer vibration mitigation operations and techniques may be embodied by instructions stored in
memory 408 and/or the storage unit 412 and executed by the processor 402. Further, local computing systems, remote data sources and/or services, and other associated logic represent firmware, hardware, and/or software that may also be configured to perform such operations. Further, any one of the host computer, a chassis-level controller, or a distributed computing storage node may be implemented using a general-purpose computer and specialized software (such as a server executing service software), a special-purpose computing system and specialized software (such as a mobile device or network appliance executing service software), or other computing configurations. In addition, program data, such as task distribution information, storage node degradation information, and other data, may be stored in the memory 408 and/or the storage unit 412 and processed by the processor 402. - It is not necessary for all of the devices shown in
FIG. 4 to be present to practice an implementation. Furthermore, the devices and subsystems may be interconnected in different ways from that shown in FIG. 4. The implementations described herein are implemented as logical steps in one or more computer systems. The logical operations of the present invention are implemented (1) as a sequence of processor-implemented steps executing in one or more computer systems and (2) as interconnected machine or circuit modules within one or more computer systems. The implementation is a matter of choice, dependent on the performance requirements of the computer system implementing the invention. Accordingly, the logical operations making up the implementations of the invention described herein are referred to variously as operations, steps, objects, or modules. Furthermore, it should be understood that logical operations may be performed in any order, adding and omitting as desired, unless explicitly claimed otherwise or a specific order is inherently necessitated by the claim language. - The above specification, examples, and data provide a complete description of the structure and use of exemplary implementations of the invention. Since many implementations of the invention can be made without departing from the spirit and scope of the invention, the invention resides in the claims hereinafter appended.
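One passage above notes that the chassis-level controller can use knowledge of the physical location of each drive in a chassis to identify likely sources of vibration near a distressed node. The sketch below illustrates that idea with a hypothetical chassis map and a toy Manhattan-distance metric; none of these names or structures come from the specification itself:

```python
from dataclasses import dataclass


@dataclass
class DriveSlot:
    """Hypothetical chassis-map entry: a drive and its (row, col) bay position."""
    drive_id: str
    row: int
    col: int
    high_io: bool = False  # whether the drive is running a vibration-heavy task


class ChassisController:
    """Illustrative chassis-level controller that ranks likely vibration
    sources near a distressed drive using the physical layout of the bays."""

    def __init__(self, slots):
        self.slots = {slot.drive_id: slot for slot in slots}

    def likely_sources(self, distressed_id):
        """Rank high-I/O drives by physical distance to the distressed drive;
        closer drives are assumed to couple more vibration into it."""
        target = self.slots[distressed_id]
        candidates = [
            slot for slot in self.slots.values()
            if slot.drive_id != distressed_id and slot.high_io
        ]
        return sorted(
            candidates,
            key=lambda s: abs(s.row - target.row) + abs(s.col - target.col),
        )
```

A host server receiving such a ranking could then direct a corrective action at the nearest high-I/O drive first, for example by pausing or migrating its task.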
Claims (25)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/788,548 US8824261B1 (en) | 2013-03-07 | 2013-03-07 | Peer to peer vibration mitigation |
Publications (2)
Publication Number | Publication Date |
---|---|
US8824261B1 US8824261B1 (en) | 2014-09-02 |
US20140254343A1 true US20140254343A1 (en) | 2014-09-11 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7434097B2 (en) * | 2003-06-05 | 2008-10-07 | Copan System, Inc. | Method and apparatus for efficient fault-tolerant disk drive replacement in raid storage systems |
US7761244B2 (en) * | 2007-09-07 | 2010-07-20 | Oracle America, Inc. | Selectively mitigating multiple vibration sources in a computer system |
US7802019B2 (en) * | 2005-06-14 | 2010-09-21 | Microsoft Corporation | Hard disk drive condition reporting and error correction |
US8204716B2 (en) * | 2009-06-25 | 2012-06-19 | Oracle America, Inc. | System and method for characterizing vibration of a rack structure |
US20120182641A1 (en) * | 2011-01-14 | 2012-07-19 | International Business Machines Corporation | Hard Disk Drive Availability Following Transient Vibration |
US20130058034A1 (en) * | 2011-09-07 | 2013-03-07 | Hitachi, Ltd. | Disk unit and disk array apparatus |
US8549219B1 (en) * | 2010-12-07 | 2013-10-01 | Hewlett-Packard Development Company, L.P. | Preventing hard drive failure and data loss due to vibration |
US20130258521A1 (en) * | 2012-03-27 | 2013-10-03 | Wistron Corporation | Management Module, Storage System, and Method of Temperature and Vibration Management Thereof |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8271140B2 (en) | 2006-08-25 | 2012-09-18 | International Business Machines Corporation | Periodic rotational vibration check for storage devices to compensate for varying loads |
US7694188B2 (en) | 2007-02-05 | 2010-04-06 | Microsoft Corporation | Disk failure prevention and error correction |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140344531A1 (en) * | 2013-05-15 | 2014-11-20 | Amazon Technologies, Inc. | Reducing interference through controlled data access |
US9378075B2 (en) * | 2013-05-15 | 2016-06-28 | Amazon Technologies, Inc. | Reducing interference through controlled data access |
US9697063B2 (en) | 2013-05-15 | 2017-07-04 | Amazon Technologies, Inc. | Allocating data based on hardware faults |
US11119669B2 (en) * | 2017-08-02 | 2021-09-14 | Seagate Technology Llc | External indicators for adaptive in-field recalibration |
US20210365196A1 (en) * | 2017-08-02 | 2021-11-25 | Seagate Technology Llc | External indicators for adaptive in-field recalibration |
Legal Events
- AS (Assignment): Owner name: SEAGATE TECHNOLOGY LLC, CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: MILLER, MICHAEL HOWARD; BOHN, RICHARD ESTEN; SIGNING DATES FROM 20130305 TO 20130306; REEL/FRAME: 029942/0682
- STCF (Information on status: patent grant): PATENTED CASE
- MAFP (Maintenance fee payment): PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); Year of fee payment: 4
- MAFP (Maintenance fee payment): PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY; Year of fee payment: 8