US20160191665A1 - Computing system with distributed compute-enabled storage group and method of operation thereof - Google Patents

Computing system with distributed compute-enabled storage group and method of operation thereof

Info

Publication number
US20160191665A1
Authority
US
United States
Prior art keywords
application
storage
data
output
storage processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/817,815
Inventor
Yangwook Kang
Yang Seok KI
Dongchul Park
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority to US14/817,815 (US20160191665A1)
Assigned to SAMSUNG ELECTRONICS CO., LTD. Assignors: KANG, YANGWOOK; KI, YANG SEOK; PARK, DONGCHUL
Priority to KR1020150190827A (KR102581374B1)
Publication of US20160191665A1
Priority to US17/955,310 (US20230028569A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 Improving I/O performance
    • G06F 3/0614 Improving the reliability of storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0646 Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F 3/065 Replication mechanisms
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/08 Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F 11/10 Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F 11/1076 Parity data used in redundant arrays of independent storages, e.g. in RAID systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H04L 67/327

Definitions

  • Embodiments relate generally to a computing system, and more particularly to a system with distributed compute-enabled storage.
  • Modern consumer and industrial electronics such as computing systems, servers, appliances, televisions, cellular phones, automobiles, satellites, and combination devices, are providing increasing levels of functionality to support modern life. These devices are more interconnected. Storage of information is becoming more of a necessity.
  • An embodiment provides an apparatus, including: a storage device configured to perform in-storage processing with formatted data based on application data from an application; and return an in-storage processing output to the application for continued execution.
  • An embodiment provides a method including: performing in-storage processing with a storage device with formatted data based on application data from an application; and returning an in-storage processing output from the storage device to the application for continued execution.
  • FIG. 1 is a computing system with distributed compute-enabled storage group in an embodiment of the present invention.
  • FIG. 2 is an example of an architectural view of a computing system with a distributed compute-enabled storage device.
  • FIG. 3 is an example of an operational view for a split function of the data preprocessor.
  • FIG. 4 is an example of an operational view for a split+padding function of the data preprocessor.
  • FIG. 5 is an example of an operational view for a split+redundancy function of the data preprocessor.
  • FIG. 6 is an example of an operational view for a mirroring function of the data preprocessor.
  • FIG. 7 is an example of an architectural view of the output coordinator.
  • FIGS. 8A and 8B are detailed examples of an operational view of the split and split+padding functions.
  • FIG. 9 is an example of an architectural view of the computing system in an embodiment.
  • FIG. 10 is an example of an architectural view of the computing system in a further embodiment.
  • FIG. 11 is an example of an architectural view of the computing system in a yet further embodiment.
  • FIG. 12 is an example of an operational view of the computing system issuing device requests for in-storage processing in a centralized coordination model.
  • FIG. 13 is an example of an operational view of the computing system issuing device requests for in-storage processing in a decentralized coordination model.
  • FIG. 14 is an operational view for the computing system of a centralized coordination model.
  • FIG. 15 is an operational view for a computing system in a decentralized model in an embodiment with one output coordinator.
  • FIG. 16 is an operational view of a computing system in a decentralized model in an embodiment with multiple output coordinators.
  • FIG. 17 is an example of a flow chart for the request distributor and the data preprocessor.
  • FIG. 18 is an example of a flow chart for a mirroring function for centralized and decentralized embodiments.
  • FIG. 19 is a flow chart of a method of operation of a computing system in an embodiment of the present invention.
  • Various embodiments provide a computing system for efficient distributed processing by providing methods and apparatus for performing in-storage processing of the application data with multiple storage devices that have in-storage processing capabilities.
  • An execution of an application can be shared by distributing the execution among various storage devices in a storage group.
  • Each of the storage devices can perform in-storage processing with the application data as requested by an application request.
  • Various embodiments provide a computing system to reduce overall system power consumption by reducing the number of inputs/outputs between the application execution and the storage device. This reduction is achieved by having the storage devices perform in-storage processing instead of mere storage, read, and re-store by the application. Instead, the in-storage processing outputs can be returned as an aggregated output from the various storage devices that performed the in-storage processing, back to the application. The application can continue to execute and utilize the in-storage outputs, the aggregated output, or a combination thereof.
  • Various embodiments provide a computing system that reduces total cost of ownership by providing formatting and translation functions for the application data for different configurations or organizations of the storage device. Further, the computing system also provides translation for the in-storage processing to be carried out by the various storage devices as part of the storage group. Examples of types of translation or formatting include split, split+padding, split+redundancy, and mirroring.
  • Various embodiments provide a computing system that also minimizes integration obstacles by allowing the storage devices to handle more of the in-storage processing coordination functions, with less being done by the host executing the application. Another embodiment allows for the in-storage processing coordination to increasingly be located and operate outside of both the host and the storage devices.
  • Various embodiments provide a computing system with more efficient execution of the application with fewer interrupts to the application by coordinating the outputs of the in-storage processing from the storage devices.
  • the output coordination can buffer the in-storage processing outputs and can also sort the order of each of the in-storage processing outputs before returning an aggregated output to the application.
  • the application can continue to execute and utilize the in-storage outputs, the aggregated output, or a combination thereof.
  • Various embodiments provide a computing system further minimizing integration obstacles by allowing the storage devices in the storage group to have different or the same functionalities.
  • one of the storage devices can function as the only output coordinator for all the in-storage processing outputs from the other storage devices.
  • the aggregation function can be distributed amongst the storage devices, passing along from storage device to storage device and performing partial aggregation at each storage device, until a final one of the storage devices returns the full aggregated output back to the application.
  • the application can continue to execute and utilize the in-storage outputs, the aggregated output, or a combination thereof.
  • module can include software, hardware, or a combination thereof in an embodiment of the present invention in accordance with the context in which the term is used.
  • the software can be machine code, firmware, embedded code, application software, or a combination thereof.
  • the hardware can be circuitry, processor, computer, integrated circuit, integrated circuit cores, a pressure sensor, an inertial sensor, a microelectromechanical system (MEMS), passive devices, or a combination thereof. Additional examples of hardware circuitry can be digital circuits or logic, analog circuits, mixed-mode circuits, optical circuits, or a combination thereof. Further, if a module is written in the apparatus claims section below, the modules are deemed to include hardware circuitry for the purposes and the scope of apparatus claims.
  • the modules in the following description of the embodiments can be coupled to one another as described or as shown.
  • the coupling can be direct or indirect without or with, respectively, intervening between coupled items.
  • the coupling can be physical contact or by communication between items.
  • Referring now to FIG. 1, therein is shown the computing system 100 with a distributed compute-enabled storage group in an embodiment of the present invention.
  • the computing system 100 is depicted in FIG. 1 as a functional block diagram of the computing system 100 with a data storage system 101 .
  • the functional block diagram depicts the data storage system 101 installed in a host computer 102 .
  • Various embodiments can include the computing system 100 with devices for storage, such as a solid state disk 110 , a non-volatile memory 112 , hard disk drives 116 , memory devices 117 , and network attached storage 122 .
  • These devices for storage can include capabilities to perform in-storage processing, that is, to independently perform relatively complex computations at a location outside of a traditional system CPU.
  • various embodiments of the present inventive concept manage the distribution of data, the location of data, and the location of processing tasks for in-storage processing.
  • these in-storage computing enabled storage devices can be grouped or clustered into arrays.
  • Various embodiments manage the allocation of data and/or processing based on the architecture and capabilities of these devices or arrays. In-storage processing is further explained later.
  • the host computer 102 can be a server or a workstation.
  • the host computer 102 can include at least a central processing unit 104 , host memory 106 coupled to the central processing unit 104 , and a host bus controller 108 .
  • the host bus controller 108 provides a host interface bus 114 , which allows the host computer 102 to utilize the data storage system 101 .
  • the central processing unit 104 can be implemented with hardware circuitry in a number of different manners.
  • the central processing unit 104 can be a processor, an application specific integrated circuit (ASIC), an embedded processor, a microprocessor, a hardware control logic, a hardware finite state machine (FSM), a digital signal processor (DSP), a field programmable gate array (FPGA), or a combination thereof.
  • the data storage system 101 can be coupled to a solid state disk 110 , such as a non-volatile memory based storage group having a peripheral interface system, or a non-volatile memory 112 , such as an internal memory card for expanded or extended non-volatile system memory.
  • the data storage system 101 can also be coupled to hard disk drives (HDD) 116 that can be mounted in the host computer 102 , external to the host computer 102 , or a combination thereof.
  • the solid state disk 110 , the non-volatile memory 112 , and the hard disk drives 116 can be considered as direct attached storage (DAS) devices, as an example.
  • the data storage system 101 can also support a network attach port 118 for coupling to a network 120 .
  • Examples of the network 120 can include a personal area network (PAN), a local area network (LAN), a storage area network (SAN), a wide area network (WAN), or a combination thereof.
  • the network attach port 118 can provide access to network attached storage (NAS) 122 .
  • the network attach port 118 can also provide connection to and from the host bus controller 108 .
  • Although the network attached storage 122 is shown as hard disk drives, this is an example only. It is understood that the network attached storage 122 could include any non-volatile storage technology, such as magnetic tape storage (not shown), or storage devices similar to the solid state disk 110, the non-volatile memory 112, or the hard disk drives 116 that are accessed through the network attach port 118. Also, the network attached storage 122 can include aggregated resources, such as just a bunch of disks (JBOD) systems or redundant array of intelligent disks (RAID) systems as well as other network attached storage 122.
  • the data storage system 101 can be attached to the host interface bus 114 for providing access to and interfacing to multiple of the direct attached storage (DAS) devices via a cable 124 for storage interface, such as Serial Advanced Technology Attachment (SATA), the Serial Attached SCSI (SAS), or the Peripheral Component Interconnect—Express (PCI-e) attached storage devices.
  • the data storage system 101 can include a storage engine 115 and memory cache 117 .
  • the storage engine 115 can be implemented with hardware circuitry, software, or a combination thereof in a number of ways.
  • the storage engine 115 can be implemented as a processor, an application specific integrated circuit (ASIC), an embedded processor, a microprocessor, a hardware control logic, a hardware finite state machine (FSM), a digital signal processor (DSP), FPGA, or a combination thereof.
  • the central processing unit 104 or the storage engine 115 can control the flow and management of data to and from the host computer 102 , and from and to the direct attached storage (DAS) devices, the network attached storage 122 , or a combination thereof.
  • the storage engine 115 can also perform data reliability check and correction, which will be further discussed later.
  • the storage engine 115 can also control and manage the flow of data between the direct attached storage (DAS) devices and the network attached storage 122 and amongst themselves.
  • the storage engine 115 can be implemented in hardware circuitry, a processor running software, or a combination thereof.
  • the storage engine 115 is shown as part of the data storage system 101 , although the storage engine 115 can be implemented and partitioned differently.
  • the storage engine 115 can be implemented as part of the host computer 102, implemented in software, implemented in hardware, or a combination thereof.
  • the storage engine 115 can be external to the data storage system 101 .
  • the storage engine 115 can be part of the direct attached storage (DAS) devices described above, the network attached storage 122 , or a combination thereof.
  • the functionalities of the storage engine 115 can be distributed as part of the host computer 102 , the direct attached storage (DAS) devices, the network attached storage 122 , or a combination thereof.
  • the central processing unit 104 or some portion of it can also be in the data storage system 101 , the direct attached storage (DAS) devices, the network attached storage 122 , or a combination thereof.
  • the memory devices 117 can function as a local cache to the data storage system 101 , the computing system 100 , or a combination thereof.
  • the memory devices 117 can be a volatile memory or a nonvolatile memory. Examples of the volatile memory can include static random access memory (SRAM) or dynamic random access memory (DRAM).
  • the storage engine 115 and the memory devices 117 enable the data storage system 101 to meet the performance requirements of data provided by the host computer 102 and store that data in the solid state disk 110 , the non-volatile memory 112 , the hard disk drives 116 , or the network attached storage 122 .
  • the data storage system 101 is shown as part of the host computer 102 , although the data storage system 101 can be implemented and partitioned differently.
  • the data storage system 101 can be implemented as a plug-in card in the host computer 102, as part of a chip or chipset in the host computer 102, as partially implemented in software and partially implemented in hardware in the host computer 102, or a combination thereof.
  • the data storage system 101 can be external to the host computer 102 .
  • the data storage system 101 can be part of the direct attached storage (DAS) devices described above, the network attached storage 122 , or a combination thereof.
  • the data storage system 101 can be distributed as part of the host computer 102 , the direct attached storage (DAS) devices, the network attached storage 122 , or a combination thereof.
  • Referring now to FIG. 2, therein is shown an architectural view of the computing system 100 with a distributed compute-enabled storage device.
  • the architectural view can depict an example of relationships between some parts in the computing system 100 .
  • the architectural view can depict the computing system 100 to include an application 202 , an in-storage processing coordinator 204 , and a storage group 206 .
  • the storage group 206 can be partitioned in the computing system 100 of FIG. 1 in a number of ways.
  • the storage group 206 can be part of or distributed among the data storage system 101 of FIG. 1 , the hard disk drives 116 of FIG. 1 , the network attached storage 122 of FIG. 1 , the solid state disk 110 of FIG. 1 , the non-volatile memory 112 of FIG. 1 , or a combination thereof.
  • the application 202 is a process executing a function.
  • the application 202 can provide an end-user (not shown) function or other functions related to the operation, control, usage, or communication of the computing system 100 .
  • the application 202 can be a software application executed by a processor, a central processing unit (CPU), a programmable hardware state machine, or other hardware circuitry that can execute software code from the software application.
  • the application 202 can be a function executed purely in hardware circuitry, such as logic gates, finite state machine (FSM), transistors, or a combination thereof.
  • the application 202 can execute on the central processing unit 104 of FIG. 1 .
  • the in-storage processing coordinator 204 manages the communication and activities between the application 202 and the storage group 206 .
  • the in-storage processing coordinator 204 can manage the operations between the application 202 and the storage group 206 .
  • the in-storage processing coordinator 204 can translate information between the application 202 and the storage group 206 .
  • the in-storage processing coordinator 204 can direct information flow and assignments between the application 202 and the storage group 206.
  • the in-storage processing coordinator 204 can include a data preprocessor 208 , a request distributor 210 , and an output coordinator 212 .
  • the in-storage processing coordinator 204 or portions of it can be executed by the central processing unit 104 or other parts of the host computer 102 .
  • the in-storage processing coordinator 204 or portions of it can also be executed by the data storage system 101 .
  • the storage engine 115 of FIG. 1 can execute the in-storage processing coordinator 204 or portions of it.
  • the hard disk drives 116 of FIG. 1 , the network attached storage 122 of FIG. 1 , the solid state disk 110 of FIG. 1 , the non-volatile memory 112 of FIG. 1 , or a combination thereof can execute the in-storage processing coordinator 204 or portions of it.
  • the data preprocessor 208 performs data formatting of application data 214 and placement of formatted data 216 .
  • the application data 214 is the information or data generated by the application 202 .
  • the formatting can enable storing the application data 214 as the formatted data 216 across multiple storage devices 218 in the storage group 206 for in-storage processing (ISP).
  • In-storage processing refers to the processing or manipulation of the formatted data 216 to be sent back to the application 202 or the system executing the application 202 .
  • the in-storage processing is more than mere storing and retrieval of the formatted data 216 .
  • Examples of the manipulation or processing as part of the in-storage processing can include integer or floating point math operations, Boolean operations, reorganization of data bits or symbols, or a combination thereof.
  • Other examples of manipulating or processing as part of the in-storage processing can include search, sort, compares, filtering, combining the formatted data 216 , the application data 214 , or a combination thereof.
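  • For concreteness, the sketch below models one such in-storage operation: a storage device filters the records it holds locally and returns only the matches as its in-storage processing output. This is an illustrative Python sketch, not the claimed implementation; the class name InStorageDevice and its methods are hypothetical.

```python
# Illustrative sketch only: a storage device that runs a filter operation
# locally ("in-storage processing") instead of returning raw data to the host.

class InStorageDevice:
    def __init__(self, name):
        self.name = name
        self.formatted_units = []              # formatted data held by this device

    def write(self, formatted_unit):
        self.formatted_units.append(formatted_unit)

    def isp_filter(self, predicate):
        # The device scans its own formatted units and returns only the records
        # that satisfy the predicate; this is an in-storage processing output,
        # not a plain read-back of the stored data.
        return [record
                for unit in self.formatted_units
                for record in unit
                if predicate(record)]

if __name__ == "__main__":
    device = InStorageDevice("device-0")
    device.write([{"id": 1, "value": 40}, {"id": 2, "value": 75}])
    device.write([{"id": 3, "value": 90}])

    # The host asks the device to filter locally; only two records cross the bus.
    print(device.isp_filter(lambda r: r["value"] > 50))
    # [{'id': 2, 'value': 75}, {'id': 3, 'value': 90}]
```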
  • the data preprocessor 208 can format the application data 214 from the application 202 and generate the formatted data 216 to be processed outside or independent from execution of the application 202 . This independent processing can be performed with the in-storage processing.
  • the application data 214 can be independent of and not necessarily the same format as those stored in the storage group 206 .
  • the format of the application data 214 can be different than the formatted data 216 , which will be described later.
  • the application data 214 can be processed in various ways depending on the policies of the computing system 100.
  • the policies can refer to availability requirements so as to affect the array configuration, such as mirroring, of the storage group 206 .
  • the policies can refer to performance requirements as to further affect the array configuration, such as striping, of the storage group 206 .
  • the application data 214 can be translated to the formatted data 216 using various methods, such as split, split+padding, split+redundancy, and mirroring. These methods can create independent data sets of the formatted data 216 that can be distributed to multiple storage devices 218 , allowing for concurrent in-storage processing.
  • the concurrent in-storage processing refers to each of the storage devices 218 in the storage group 206 being able to independently process or operate on the formatted data 216 , the application data 214 , or a combination thereof.
  • This independent processing or operation can be independent of the execution of the application 202 , the other storage devices 218 of the storage group 206 that received some of the formatted data 216 from the application data 214 , or a combination thereof.
  • the request distributor 210 manages application requests 220 between the application 202 and the storage group 206 .
  • the request distributor 210 accepts the application requests 220 from the application 202 and distributes them.
  • the application requests 220 are actions between the application 202 and the storage group 206 based on the in-storage processing.
  • the application requests 220 can provide information from the application 202 to be off-loaded to the storage group 206 for in-storage processing. Furthering the example, the results of the in-storage processing can be returned to the application 202 based on the application requests 220 .
  • the request distributor 210 manages the application requests 220 from the application 202 for in-storage processing, for write or storage, or for output.
  • the request distributor 210 also distributes the application requests 220 from the application 202 across the multiple storage devices 218 in the storage group 206 .
  • incoming application requests 220 for in-storage processing can be split into multiple sub-application requests 222 to perform in-storage processing according to a distribution of the formatted data 216 , organization of the storage group 206 , or other policies.
  • the request distributor 210 can perform this split of the application request 220 for in-storage processing based on the placement scheme for the application data 214 , the formatted data 216 , or a combination thereof.
  • Example types of data placement schemes include a centralized scheme and decentralized scheme, discussed from FIGS. 9 to 11 .
  • In a centralized model, the data preprocessor 208 is placed inside the in-storage processing coordinator 204, while a decentralized model places the data preprocessor 208 inside the storage group 206.
  • When the in-storage processing coordinator 204 receives an application request 220, such as a data write request with the required information (for example, address, data, data length, and a logical boundary) from the application 202, the request distributor 210 provides the data preprocessor 208 with the required information, such as the data, data length, and logical boundary. Then, the data preprocessor 208 partitions the data into multiple data chunks of an appropriate size based on the store unit information. Then, the request distributor 210 distributes the corresponding data chunks to each of the storage devices 218 with multiple sub-application requests 222.
  • the storage group 206 , the storage devices 218 , or a combination thereof can receive the application requests 220 , the sub-application requests 222 , or a combination thereof.
  • In a decentralized model, the request distributor 210 divides the data into chunks of a predefined size, for instance, data size/N, where N is the number of storage devices, and then distributes the chunks of data to each of the storage devices 218 with sub-application requests 222 combined with the required information such as address, data length, and a logical boundary. Then, the data preprocessor 208 inside the storage devices 218 partitions the assigned data into smaller chunks based on the store unit information.
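  • As a rough illustration of the chunking rule just described (data size divided by the number of storage devices N), the Python sketch below pairs each chunk with the bookkeeping a sub-application request would carry. The helper name chunk_for_devices and the request fields are assumptions for illustration, not the patented format.

```python
# Simplified sketch of the decentralized distribution rule: the request
# distributor cuts the write payload into N chunks (roughly data size / N)
# and attaches the information each sub-application request needs.

def chunk_for_devices(data: bytes, num_devices: int, base_address: int = 0):
    """Return one (device_index, sub_request) pair per storage device."""
    chunk_size = -(-len(data) // num_devices)      # ceiling of data size / N
    sub_requests = []
    for i in range(num_devices):
        chunk = data[i * chunk_size:(i + 1) * chunk_size]
        sub_requests.append((i, {
            "address": base_address + i * chunk_size,   # where this chunk lands
            "data": chunk,
            "data_length": len(chunk),
        }))
    return sub_requests

if __name__ == "__main__":
    payload = bytes(range(100))
    for device_index, sub_request in chunk_for_devices(payload, num_devices=4):
        print(device_index, sub_request["address"], sub_request["data_length"])
```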
  • the request distributor 210 can send the write request to the data preprocessor 208 so that it can determine how to distribute the application data 214 . Once data distribution is determined, the request distributor 210 can issue the write request to the storage devices 218 in the storage group 206 .
  • the host bus controller 108 of FIG. 1 or the network attach port 118 of FIG. 1 can be used to execute the request distributor 210 and issue the application requests 220.
  • the storage devices 218 can perform the in-storage processing on the formatted data 216 .
  • the request distributor 210 can process the request for output by forwarding the output request to the in-storage processing coordinator 204 , or as a specific example to the output coordinator 212 , to send in-storage processing outputs 224 back to the application 202 or the system executing the application 202 .
  • the application can continue to execute with the in-storage processing outputs 224 .
  • the in-storage processing outputs 224 can be the results of the in-storage processing by the storage group 206 of the formatted data 216 .
  • the in-storage processing outputs 224 are not a mere read-back or read of the formatted data 216 stored in the storage group 206 .
  • the output coordinator 212 can manage processed data generated from each of the multiple storage devices 218 of the storage group 206 and can send it back to the application 202 .
  • the output coordinator 212 collects the results or the in-storage processing outputs 224 and provides them to the application 202 or various applications 202 or the system executing the application 202 .
  • the output coordinator 212 will be described later.
  • the computing system 100 also can provide error handling capabilities. For example, when one or more of the storage devices 218 in the storage group 206 become inaccessible or has a slower performance, the application requests 220 can fail, such as time-outs or non-completions. For better availability, the computing system 100 can perform a number of actions.
  • the in-storage processing coordinator 204 can maintain a request log that can be used to issue retries for the application requests 220 that failed or were not completed. Also as an example, the in-storage processing coordinator 204 can keep retrying the application requests 220 to write the application data 214. As a further example, the in-storage processing coordinator 204 can report the status of the application requests 220 to the application 202.
  • An example of the error recovery technique can be a redundant array of inexpensive disk (RAID) recovery with rebuilding a storage device 218 that has been striped.
  • the in-storage processing coordinator 204 can retry the application requests 220 that previously failed. The in-storage processing coordinator 204 can also generate reports of failures even if the application requests 220 are redirected, retried, and even eventually successful.
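  • One way to picture the request log and retry behavior described above is the Python sketch below. The RequestLog class, the retry limit, and the status report format are illustrative assumptions rather than the patented error-handling mechanism.

```python
# Illustrative sketch of a request log used to retry failed application
# requests and to report their status back to the application.

class RequestLog:
    def __init__(self, max_retries=3):
        self.max_retries = max_retries
        self.attempts = {}                      # request_id -> attempts made

    def submit(self, request_id, issue_fn):
        """Issue a request, retrying on failure up to max_retries extra times."""
        tries = 0
        while tries <= self.max_retries:
            tries += 1
            self.attempts[request_id] = tries
            try:
                result = issue_fn()
                return {"request": request_id, "status": "ok",
                        "attempts": tries, "result": result}
            except TimeoutError:
                continue                        # device slow or unreachable; retry
        return {"request": request_id, "status": "failed", "attempts": tries}

if __name__ == "__main__":
    outcomes = iter([TimeoutError(), TimeoutError(), "isp output"])

    def flaky_device():
        outcome = next(outcomes)
        if isinstance(outcome, Exception):
            raise outcome
        return outcome

    log = RequestLog()
    print(log.submit("req-42", flaky_device))
    # {'request': 'req-42', 'status': 'ok', 'attempts': 3, 'result': 'isp output'}
```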
  • the in-storage processing coordinator 204 or at least a portion of it can be implemented in a number of ways.
  • the in-storage processing coordinator 204 can be implemented with software, hardware circuitry, or a combination thereof.
  • hardware circuitry can include a processor, an application specific integrated circuit (ASIC), an embedded processor, a microprocessor, a hardware control logic, a hardware finite state machine (FSM), a digital signal processor (DSP), FPGA, or a combination thereof.
  • Referring now to FIG. 3, therein is shown an example of an operational view for a split function of the data preprocessor 208 of FIG. 2.
  • FIG. 3 depicts the application data 214 as input to the data preprocessor 208 or, more generally, the in-storage processing coordinator 204 of FIG. 2.
  • FIG. 3 depicts one example method of the data formatting performed by the data preprocessor 208 as mentioned in FIG. 2 .
  • the data formatting is a split function or a split scheme.
  • FIG. 3 also depicts the formatted data 216 as the output of the data preprocessor 208 .
  • the amount of the application data 214 is shown to span a transfer length 302 .
  • the transfer length 302 refers to the amount of data or information sent by the application 202 to the data preprocessor 208 or vice versa.
  • the transfer length 302 can be a fixed size or variable depending on what the application 202 transfers for in-storage processing.
  • the application data 214 can include application units 304 .
  • the application units 304 are fields within or portions of the application data 214 .
  • Each of the application units 304 can be fixed in size or can be variable.
  • the application units 304 can represent partitioned portions or chunks of the application data 214.
  • the size of each of the application units 304 can be the same across the application data 214 . Also as an example, the size of each of the application units 304 across the application data 214 can differ for different transfers of the application data 214 . Further for example, the size of each of the application units 304 can vary within the same transfer or across transfers. The application units 304 can also vary in size depending on the different applications 202 sending the application data 214 . The number of application units 304 can vary or can be fixed. The number of application units 304 can vary for the same application 202 sending the application data 214 or between different applications 202 .
  • FIG. 3 also depicts the formatted data 216 as the output of the data preprocessor 208 .
  • the formatted data 216 can include formatted units 306 (FUs).
  • the formatted units 306 are fields within the formatted data 216.
  • each of the formatted units 306 can be fixed in size or can be variable.
  • the size of the formatted units 306 can be the same for the formatted data 216 or different for transfers of the formatted data 216 , or can vary within the same transfer or across transfers.
  • the formatted units 306 can also vary in size depending on the different applications 202 sending the formatted data 216 .
  • the number of the formatted units 306 can vary or can be fixed.
  • the number of the formatted units 306 can vary for the same application 202 sending the formatted data 216 or between different applications 202 .
  • FIG. 3 depicts the formatted data 216 after a split formatting with the formatted units 306 overlaid visually with the application units 304 .
  • An example storage application for this split formatting or split scheme can be with redundant array of inexpensive disk (RAID) systems as the storage group 206 , or with at least some of the multiple storage devices 218 in the storage group 206 .
  • the in-storage processing, or even the mere storage of the application data 214 can at least involve splitting the application units 304 to different destination devices in the storage group 206 .
  • the data preprocessor 208 can split the application data 214 into predefined fixed-length blocks referred to as the formatted units 306 and can give each block to one or more of the multiple storage devices 218 for the in-storage processing in a round robin fashion, as an example.
  • the split scheme can generate non-aligned data sets between the application data 214 and the formatted data 216 .
  • the data preprocessor 208 can generate the non-alignment between the application units 304 relative to the boundaries for the formatted units 306 .
  • FIG. 3 depicts an alternating formatting or allocation of the application units 304 to the different devices in the storage group 206 .
  • the application units 304 are depicted as "Data 1", "Data 2", "Data 3", and through "Data K".
  • the formatted units 306 are depicted as “FU 1 ”, “FU 2 ”, and through “FU N”.
  • the formatted data 216 can have alternating instances targeted for one device or another device in the storage group 206 .
  • odd numbered “FUs” can be for drive 1 and even numbered “FUs” can be for drive 0 .
  • the overlay of the application units 304 as “Data” is shown as not aligned with the boundaries of the “FU” and FIG. 3 depicts “Data 2 ” and “Data K” being split between FU 1 (drive 1 ) and FU 2 (drive 0 ), again for this example.
  • the formatted data 216 can also be stored on one of the storage devices 218 as opposed to being partitioned or allocated to different instances of the storage devices 218 in the storage group 206.
  • the formatted units 306 can be sized for a sector based on a physical block address or a logical block address on one of the storage devices 218 as a hard disk drive or a solid state disk drive.
  • the request distributor 210 can initially send up to N in-storage processing application requests 220 to the storage devices 218 in the storage group 206 .
  • N is an integer number.
  • the application units 304 that are not aligned with the formatted units 306 such as “Data 2 ” and “Data K” in this figure and example, can undergo additional processing at the storage devices 218 with in-storage processing.
  • the non-aligned application units 304 can be determined after initial processing of the application data 214 by the host computer 102 of FIG. 1 , the request distributor 210 , or other storage devices 218 .
  • the non-aligned application units 304 can be fetched by the host computer 102 or the request distributor 210, allowing the non-aligned application units 304 to be concurrently processed by the host computer 102, the request distributor 210, the storage devices 218 in the storage group 206, or a combination thereof.
  • the non-aligned application units 304 can also be fetched by the host computer 102 or the request distributor 210 such that these non-aligned application units 304 can be written back to the devices for in-storage processing.
  • Each of the storage devices 218 can send the results of the processed non-aligned application units 304 to the host computer 102 , the request distributor 210 , or the other storage devices 218 so the host computer 102 or the other storage devices 218 can continue to process the application data 214 .
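  • A minimal Python sketch of the split scheme described above follows, assuming fixed-size formatted units handed to the storage devices round robin; the function name split_format and the two-device layout are illustrative only.

```python
# Minimal sketch of the split scheme: the application data is cut into
# fixed-length formatted units (FUs), and the FUs are handed to the storage
# devices round robin.  Application units that straddle an FU boundary end
# up "non-aligned", like Data 2 and Data K in the example of FIG. 3.

def split_format(application_data: bytes, fu_size: int, num_devices: int):
    """Return a list of (device_index, formatted_unit) pairs."""
    placements = []
    for offset in range(0, len(application_data), fu_size):
        formatted_unit = application_data[offset:offset + fu_size]
        device_index = (offset // fu_size) % num_devices     # round robin
        placements.append((device_index, formatted_unit))
    return placements

if __name__ == "__main__":
    # Three 5-byte application units packed back to back; with 8-byte FUs the
    # second unit spans the first and second FUs, i.e. two different devices.
    data = b"AAAAA" + b"BBBBB" + b"CCCCC"
    for device, fu in split_format(data, fu_size=8, num_devices=2):
        print(device, fu)
```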
  • Referring now to FIG. 4, therein is shown an example of an operational view for a split+padding function of the data preprocessor 208 of FIG. 2.
  • FIG. 4 depicts the application data 214 as input to the data preprocessor 208 as similarly described in FIG. 3 .
  • FIG. 4 also depicts the formatted data 216 as the output of the data preprocessor 208 .
  • FIG. 4 depicts one example method of the data formatting performed by the data preprocessor 208 as mentioned in FIG. 2 .
  • the data formatting is a split+padding function or split+padding scheme.
  • the split+padding function by the data preprocessor 208 adds data pads 402 to align the application units 304 to the formatted units 306 .
  • the alignment of the application units 304 and the formatted units 306 can allow the request distributor 210 of FIG. 2 to send up to K independent in-storage processing application requests 220 to multiple storage devices 218 of FIG. 2 in the storage group 206 of FIG. 2 .
  • K is an integer.
  • the alignment allows for each of the multiple storage devices 218 to perform in-storage processing of the application units 304 , the formatted units 306 , or a combination thereof independently without requiring further formatting or processing required on the formatted data 216 .
  • each of the formatted units 306 includes one of the application units 304 plus one of the data pads 402 .
  • Each of the data pads 402 aligns each of the application units 304 to the boundaries of each of the formatted units 306 .
  • the data pads 402 can also provide other functions or include other information.
  • the data pads 402 can include error detection or error correction information, such as parity, ECC protection, meta-data, etc.
  • the data pads 402 can be placed or located in a number of different locations within the formatted units 306 .
  • one of the data pads 402 can be located at the end of one of the application units 304 as shown in FIG. 4 .
  • each of the data pads 402 can also be located at the beginning of each of the formatted units 306 and before each of the application units 304 .
  • each of the data pads 402 can be distributed, uniformly or non-uniformly, across each of the formatted units 306 and within each of the application units 304 .
  • each of the data pads 402 can depend on the difference in size between each of the application units 304 and each of the formatted units 306 .
  • the data pads 402 can be the same for each of the formatted units 306 or can vary.
  • the term size can refer to the number of bits or symbols for the formatted units 306 , the application units 304 , or a combination thereof.
  • the term size can also refer to the transmission time, recording time, or a combination thereof for the formatted units 306 , the application units 304 , or a combination thereof.
  • the size of the application data 214 is shown to span the transfer length 302 as similarly described in FIG. 3 .
  • the application units 304 are depicted as "Data 1", "Data 2", "Data 3", and through "Data K".
  • the formatted units 306 are depicted as “FU 1 ”, “FU 2 ”, and through “FU N”.
  • FIG. 4 depicts the formatted data 216 after a split+padding formatting with the formatted units 306 overlaid visually with the application units 304 and with the data pads 402 .
  • An example storage application for this split+padding formatting or scheme can be with use of redundant array of inexpensive disk (RAID) systems as the storage group 206 or with at least some of the multiple storage devices 218 in the storage group 206 .
  • the in-storage processing or even the mere storage of the application data 214 can at least involve splitting+padding the application units 304 to different destination devices in the storage group 206 .
  • the data preprocessor 208 can split the application data 214 with the data pads 402 into a predefined length, and give each length to one or more of the storage devices 218 for in-storage processing.
  • a length can include any number of the application units 304 .
  • the formatted data 216 of any length can be targeted for one or more of the multiple storage devices 218 in the storage group 206 .
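  • The split+padding scheme can be pictured with the Python sketch below, which pads each application unit out to the formatted-unit boundary; the zero-byte pad and the function name split_with_padding are assumptions for illustration, and in practice the pad could carry parity, ECC, or metadata as noted above.

```python
# Sketch of the split+padding scheme: each application unit gets its own
# formatted unit, padded out to the FU boundary so that every FU can be
# processed independently by one storage device.

def split_with_padding(application_units, fu_size: int, pad_byte: bytes = b"\x00"):
    """Return one formatted unit per application unit, each exactly fu_size long."""
    formatted_units = []
    for unit in application_units:
        if len(unit) > fu_size:
            raise ValueError("application unit larger than a formatted unit")
        pad = pad_byte * (fu_size - len(unit))   # pad placed at the end, as in FIG. 4
        formatted_units.append(unit + pad)
    return formatted_units

if __name__ == "__main__":
    units = [b"Data1", b"Da2", b"Data3XX"]
    for fu in split_with_padding(units, fu_size=8):
        print(fu, len(fu))
```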
  • Referring now to FIG. 5, therein is shown an example of an operational view for a split+redundancy function of the data preprocessor 208 of FIG. 2.
  • FIG. 5 depicts the application data 214 as input to the data preprocessor 208 as similarly described in FIG. 3 .
  • the application data 214 can include the application units 304 .
  • FIG. 5 also depicts the formatted data 216 as the output of the data preprocessor 208 as similarly described in FIG. 3 .
  • the formatted data 216 can include the formatted units 306 .
  • the split+redundancy function can process the aligned and non-aligned application units 304 .
  • the in-storage processing in each of the storage devices 218 of FIG. 2 in the storage group 206 of FIG. 2 can process the aligned application units 304 separately, the non-aligned application units 304 separately, or both at the same time.
  • the data preprocessor 208 is performing the split+redundancy function or the split+redundancy scheme.
  • the split+padding function can split the application data 214 into formatted data 216 of fixed length, variable length, or a combination thereof.
  • the data preprocessor 208 does not necessarily need to manipulate the application data 214, the application units 304, or a combination thereof that are non-aligned to the formatted units 306, as in the split function described in FIG. 3. This is depicted as the first row of the formatted data 216 in FIG. 5 and is the redundancy data 502.
  • the formatted data 216 generated from the split+redundancy function includes the redundancy data 502 .
  • the redundancy data 502 can be an output of the data preprocessor 208 mapping the application data 214 or, as a more specific example, the application units 304 to the formatted data 216 and across the formatted units 306, even with some of the application units 304 nonaligned with the formatted units 306.
  • some of the application units 304 fall within the boundary of one of the formatted units 306 and these application units 304 are considered aligned.
  • Other instances of the application units 304 traverse multiple instances of the formatted units 306 and these application units 304 are considered nonaligned.
  • the application units 304 depicted as "Data 2" and "Data K" each span across two adjacent instances of the formatted units 306.
  • the split+redundancy function can also perform the split+padding function to some of the application units 304 .
  • the data preprocessor 208 can store the application units 304 that are not aligned to the formatted units 306. This is depicted in the second row of the formatted data 216 of FIG. 5 and is the aligned data 504.
  • the data preprocessor 208 can perform the split+padding function as described in FIG. 4 to form the aligned data 504 .
  • the application units 304 "Data 2" and "Data K" are not aligned to the formatted units 306 and traverse multiple instances of the formatted units 306.
  • the aligned data 504 generated by the data preprocessor 208 adds the data pads 402 to these instances of the nonaligned application units 304 in the redundancy data 502.
  • the split+redundancy function allows the in-storage processing coordinator 204 to send up to N+M requests to the storage devices 218 in the storage group 206 .
  • N and M are integers.
  • N represents the number of formatted units 306 in the redundancy data 502 .
  • M represents the number of additional formatted units 306 in the aligned data 504.
  • the non-aligned application units 304 in the redundancy data 502 can be ignored.
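  • A compact Python sketch of the split+redundancy scheme follows: the first row is a plain split of the application data into N formatted units (the redundancy data), and every application unit that crosses an FU boundary is additionally stored as an aligned, padded FU (the aligned data), giving up to N+M independently processable units. The function name and zero-byte padding are illustrative assumptions.

```python
# Sketch of the split+redundancy scheme described for FIG. 5.

def split_with_redundancy(application_units, fu_size: int):
    stream = b"".join(application_units)
    # Redundancy data: a plain split of the whole stream into N formatted units.
    redundancy_row = [stream[i:i + fu_size]
                      for i in range(0, len(stream), fu_size)]

    # Aligned data: each non-aligned application unit is stored again, padded
    # out to the FU boundary, adding M extra formatted units.
    aligned_row, offset = [], 0
    for unit in application_units:
        first_fu = offset // fu_size
        last_fu = (offset + len(unit) - 1) // fu_size
        if first_fu != last_fu:                       # unit crosses an FU boundary
            aligned_row.append(unit + b"\x00" * ((-len(unit)) % fu_size))
        offset += len(unit)
    return redundancy_row, aligned_row

if __name__ == "__main__":
    units = [b"AAAAA", b"BBBBB", b"CCCCC"]            # the middle unit is non-aligned
    redundancy, aligned = split_with_redundancy(units, fu_size=8)
    print(redundancy)   # [b'AAAAABBB', b'BBCCCCC']
    print(aligned)      # [b'BBBBB\x00\x00\x00']
```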
  • Referring now to FIG. 6, therein is shown an example of an operational view for a mirroring function of the data preprocessor 208 of FIG. 2. FIG. 6 depicts the formatted data 216 as the output of the data preprocessor 208 as similarly described in FIG. 3.
  • the formatted data 216 of FIG. 2 can include the formatted units 306 of FIG. 3 .
  • the application data 214 of FIG. 3 can be processed by the data preprocessor 208 .
  • FIG. 6 depicts the multiple storage devices 218 as “Device 1 ” through “Device r” for the replica data 602 .
  • Replicated units 604 are the application units 304 of FIG. 3 that are replicated and are shown as “Data 1 ”, “Data 2 ”, “Data 3 ” through “Data K” on “Device 1 ” through “Device r”.
  • One of the storage devices 218 can store the application data 214 as the formatted data 216 for that storage device 218 .
  • Some of the other storage devices 218 can store the replica data 602 and the replicated units 604 .
  • the data preprocessor 208 does not manipulate the application units 304 or the application data 214 as a whole. However, the data preprocessor 208 can collect or store mirroring information and the application units 304. Also, the in-storage processing coordinator 204 can receive the application data 214 or the application units 304 from the application 202 when preparing them for efficient, concurrent in-storage processing.
  • the in-storage processing coordinator 204 or the data preprocessor 208 can perform the mirroring functions in a number of ways. As an example, the in-storage processing coordinator 204 or the data preprocessor 208 can take into account factors for mirroring the application data 214 to the formatted data 216 . One factor is the number of target devices from the multiple storage devices 218 . Another factor is the size of the application data 214 , the application units 304 of FIG. 3 , or a combination thereof. A further factor is the size of the formatted data 216 , the formatted units 306 , or a combination thereof.
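  • A short Python sketch of the mirroring function follows: the application units are not reformatted, and an identical copy of the replicated units is assigned to each of the r target devices. The device naming and dictionary layout are assumptions for illustration.

```python
# Sketch of the mirroring scheme: the application units are left unchanged and
# the same replicated units are written to every one of the r target devices.

def mirror(application_units, num_replicas: int):
    """Return a mapping of device name -> identical copy of the application units."""
    return {f"device-{r + 1}": list(application_units)   # Device 1 .. Device r
            for r in range(num_replicas)}

if __name__ == "__main__":
    units = [b"Data1", b"Data2", b"Data3"]
    for device, data in mirror(units, num_replicas=3).items():
        print(device, data)
```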
  • the output coordinator 212 manages the in-storage processing outputs 224 generated from each of the multiple storage devices 218 of the storage group 206 and sends them back to the application 202.
  • the output coordinator 212 can manage the interaction with the application 202 in a number of ways.
  • the output coordinator 212 can collect the in-storage processing outputs 224 from the storage devices 218 .
  • the output coordinator 212 can fetch the in-storage processing outputs 224 or their locations from each of the storage devices 218 that performed the in-storage processing of the application data 214 of FIG. 2 , the formatted data 216 of FIG. 2 , or a combination thereof.
  • the output coordinator 212 can fetch the in-storage processing outputs 224 in a number of ways.
  • the output coordinator 212 can utilize a direct memory access (DMA) with the storage devices 218 .
  • DMA transfers are transfer mechanisms not requiring a processor or a computing resource to manage the actual transfer once the transfer is setup.
  • the output coordinator 212 can utilize a programmed input/output (PIO) with the storage devices 218.
  • PIO transfers are transfer mechanisms where a processor or computing resource manages the actual transfer of data and not just the setup and status collection at the termination of the transfer.
  • the output coordinator 212 can utilize interface protocol commands, such as SATA vendor specific commands, PCIe, DMA, or Ethernet commands.
  • the storage devices 218 can send the in-storage processing outputs 224 to the output coordinator 212 in a number of ways.
  • the output coordinator 212 can utilize the DMA or PIO mechanisms.
  • the DMA can be a remote DMA (rDMA) whereby the transfer is a DMA process from memory of one computer (e.g. the computer running the application 202 ) into that of another (e.g. one of the storage devices 218 for the in-storage processing) without involving either one's operating system or processor intervention for the actual transfer.
  • the output coordinator 212 can utilize interface protocol processes, such as background SATA connection or Ethernet.
  • the storage devices 218 can send their respective in-storage processing outputs 224 or their locations to the application 202. This can be accomplished without the in-storage processing outputs 224 passing through the output coordinator 212.
  • the storage devices 218 and the application 202 can interact in a number of ways, such as DMA, rDMA, PIO, back SATA connection, or Ethernet.
  • the output coordinator 212 can manage the order of the outputs from the storage devices 218 .
  • the output management 704 manages the outputs based on multiple constraints, such as size of output, storage capacity of output coordinator 212 , and types of the application requests 220 of FIG. 2 .
  • the outputs can be the in-storage processing outputs 224 .
  • the output management 704 can order the outputs based on various policies.
  • the outputs or the in-storage processing outputs 224 for each of the sub-application requests 222 of FIG. 2 for in-storage processing can be stored in a sorted order by a sub-request identification 708 per a request identification 710 for the in-storage processing.
  • the request distributor 210 can transform the application request 220 of FIG. 2 into multiple sub-application requests 222 with the formatted data 216 and distribute them to the storage devices 218.
  • the output coordinator 212 gathers the in-storage processing outputs 224 from each of the storage devices 218 .
  • the output coordinator 212 may need to preserve the issuing order of application requests 220 , the sub-application requests 222 , or a combination thereof even though the in-storage processing outputs 224 from the storage devices 218 can be delivered to the output coordinator 212 in an arbitrary order because the data processing time of the storage devices 218 can be different.
  • the storage group 206 of FIG. 2 can assign a sequence number to each of the in-storage processing outputs 224 , where each of the in-storage processing outputs 224 also can be composed of multiple sub-outputs. For these sub-outputs, the storage group 206 also assigns sequence numbers or sequence identifications.
  • When the output coordinator 212 receives each of the in-storage processing outputs 224 or sub-output data from each of the storage devices 218, it can maintain each output's sequence, thereby sorting them by sequence numbers or identifications. If the order of the in-storage processing outputs 224 or the sub-outputs is not important for the application 202, the output coordinator 212 can send the in-storage processing outputs 224 in an out-of-order manner.
  • the request identification 710 represents information that can be used to demarcate one of the application requests 220 from another.
  • the sub-request identification 708 represents information that can be used to demarcate one of the sub-application requests 222 from another.
  • the sub-request identification 708 can be unique or associated with a specific instance of the request identification 710 .
  • the sub-request identification 708 can be non-constrained to a specific instance of the request identification 710 .
  • the output coordinator 212 can include an output buffer 712.
  • the output buffer 712 can store the in-storage processing outputs 224 from the storage devices 218 .
  • the output buffer 712 can be implemented in a number of ways.
  • the output buffer 712 can be a hardware implementation of a first-in first-out (FIFO) circuit or of a linked list structure.
  • the output buffer 712 can be implemented with memory circuitry with the software providing the intelligence for the FIFO operations, such as pointers, status flags, etc.
  • the outputs or the in-storage processing outputs 224 for each of the sub-application requests 222 can be added to the output buffer 712 .
  • the in-storage processing outputs 224 can be fetched from the output buffer 712 as long as the output for the desired instance of the sub-application requests 222 is in the output buffer 712 .
  • the sub-request identification 708 can be utilized to determine whether the associated in-storage processing output 224 has been stored in the output buffer 712 .
  • the request identification 710 can also be utilized, such as an initial determination.
  • the output coordinator 212 can collect the in-storage processing outputs 224 from the storage devices 218. To guarantee the data integrity of the in-storage processing outputs 224, the output coordinator 212 can maintain the sequence of each of the in-storage processing outputs 224 or sub-output data in the correct order. For this, the output coordinator 212 can utilize the sub-request identification 708 or the request identification 710 (e.g. if each of the in-storage processing outputs 224 of each of the storage devices 218 also reuses the same identification as its output sequence number or output sequence identification).
  • the output coordinator 212 can temporarily store each of the in-storage processing outputs 224 or sub-output data into the output buffer 712 to make them all sequential (i.e., correct data order). If there is any missing in-storage processing output 224 or sub-output (that is, a hole in the sequence IDs), the application 202 cannot get the output data until all the in-storage processing outputs 224 are correctly collected in the output buffer 712.
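  • As an illustration of this buffering and reordering behavior, the following is a minimal sketch, assuming a simple integer sequence number per output; the class and method names (OutputBuffer, put, drain) are hypothetical and not taken from the embodiments. The buffer releases outputs only while the sequence is contiguous, so a hole in the sequence IDs holds back all later outputs.

```python
# Minimal sketch of an output buffer that reorders in-storage processing
# outputs by sequence number and releases them only when no "holes" remain.
# Names (OutputBuffer, put, drain) are illustrative, not from the patent.

class OutputBuffer:
    def __init__(self):
        self.pending = {}        # sequence number -> output payload
        self.next_seq = 0        # next sequence number owed to the application

    def put(self, seq, output):
        """Store an out-of-order output arriving from a storage device."""
        self.pending[seq] = output

    def drain(self):
        """Return the longest contiguous run of outputs starting at next_seq.

        If a sequence number is missing (a hole), nothing past the hole is
        released, so the application never sees outputs out of order.
        """
        released = []
        while self.next_seq in self.pending:
            released.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return released


# Outputs arrive in arbitrary order because device processing times differ.
buf = OutputBuffer()
buf.put(2, "out-2")
buf.put(0, "out-0")
print(buf.drain())   # ['out-0'] -- seq 1 is still a hole, so out-2 is held back
buf.put(1, "out-1")
print(buf.drain())   # ['out-1', 'out-2']
```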
  • the outputs or the in-storage processing outputs 224 for each of the sub-application requests 222 can be sent to the application 202 without passing through the output coordinator 212 or the output buffer 712 in the output coordinator 212 .
  • the in-storage processing outputs 224 can be sent from the storage devices 218 without being stored before reaching the application 202 .
  • the application 202 can retrieve the in-storage processing outputs 224 in a number of ways.
  • the output retrieval 706 can include the in-storage processing outputs 224 passing through the output coordinator 212 .
  • the output retrieval 706 can include the in-storage processing outputs 224 being sent to the application 202 without passing through the output buffer 712 .
  • the outputs or the in-storage processing outputs 224 can be passed from the storage devices 218 to the output coordinator 212 .
  • the output coordinator 212 can store the in-storage processing outputs 224 in the output buffer 712 .
  • the output coordinator 212 can then send the in-storage processing outputs 224 to the application 202 .
  • the outputs or the in-storage processing outputs 224 can be passed from the storage devices 218 to the output coordinator 212 .
  • the output coordinator 212 can send the in-storage processing outputs 224 to the request distributor 210 .
  • the request distributor 210 can send the in-storage processing outputs 224 to the application 202 .
  • the output buffer 712 can be within the output coordinator 212 , the request distributor 210 , or a combination thereof.
  • the outputs or the in-storage processing outputs 224 can be passed from the storage devices 218 to the application 202. In this example, this transfer is direct, without the in-storage processing outputs 224 passing through the output coordinator 212, the request distributor 210, or a combination thereof.
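  • The three retrieval paths above can be summarized with a small sketch; the path names and hop labels below are illustrative placeholders rather than interfaces defined by the embodiments.

```python
# Toy sketch of the three retrieval paths described above; the hop labels and
# path names are illustrative placeholders, not interfaces from the patent.

RETRIEVAL_PATHS = {
    "via_output_coordinator": ["storage_device", "output_coordinator", "application"],
    "via_request_distributor": ["storage_device", "output_coordinator",
                                "request_distributor", "application"],
    "direct": ["storage_device", "application"],
}

def trace_path(path_name, output):
    """Return the ordered handoffs an in-storage processing output makes."""
    return [(hop, output) for hop in RETRIEVAL_PATHS[path_name]]

print(trace_path("direct", "isp-output-0"))
# [('storage_device', 'isp-output-0'), ('application', 'isp-output-0')]
```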
  • the output coordinator 212 can be implemented in a number of ways.
  • the output coordinator 212 can be implemented in hardware circuitry, such as a processor, an application specific integrated circuit (ASIC), an embedded processor, a microprocessor, hardware control logic, a hardware finite state machine (FSM), a digital signal processor (DSP), an FPGA, or a combination thereof.
  • the output coordinator 212 can be implemented with software.
  • the output harvest 702 , the output management 704 , the output retrieval 706 , or a combination thereof can be implemented with hardware circuitry, with the examples noted earlier, or by software.
  • the request distributor 210 can be implemented in a number of ways.
  • the request distributor 210 can be implemented in hardware circuitry, such as a processor, an application specific integrated circuit (ASIC), an embedded processor, a microprocessor, hardware control logic, a hardware finite state machine (FSM), a digital signal processor (DSP), an FPGA, or a combination thereof.
  • the request distributor 210 can be implemented with software.
  • Referring now to FIGS. 8A and 8B, therein are shown detailed examples of an operational view of the split and split+padding functions.
  • FIGS. 8A and 8B depict examples of the application data 214 and the application units 304 .
  • the application units 304 can be processed by the in-storage processing coordinator 204 of FIG. 2 .
  • FIGS. 8A and 8B each depicts one example.
  • FIG. 8A depicts the application data 214 undergoing the split function, similarly to the one described in FIG. 3 .
  • This depiction can also represent a striping function in a RAID application.
  • In FIG. 8A, this part of the figure depicts the formatted data 216 and the formatted units 306.
  • the formatted data 216 is split and sent to two of the storage devices 218 .
  • Each of the formatted units 306 includes one or more of the application units 304 , such as FU 0 in DEV 1 , which can include AU 0 and AU 1 .
  • These application units 304 such as AU 0 , AU 1 , AU 2 , AU 4 , etc., can be each entirely contained in one of the formatted units 306 or traverse or span across multiple formatted units 306 , such as AU 3 , AU 6 , AU 8 , etc.
  • some of the application units 304 are aligned with the formatted units 306 while others are not.
  • In this example, there are shown 10 of the application units 304 being split into the formatted units 306 that are sent to two of the storage devices 218.
  • application units 304 labeled as AU 1 , AU 3 , AU 5 , AU 6 , and AU 8 are not aligned.
  • These non-aligned application units 304 can be identified with in-storage processing and separately processed by host systems or cooperatively with other storage devices 218 . Therefore, the application requests 220 of FIG. 4 for in-storage processing can be serialized and more complex request coordination could be required.
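  • A minimal sketch of how such non-aligned application units can be identified is shown below, assuming a fixed formatted-unit (chunk) size and a list of application-unit lengths; the function name and the example sizes are hypothetical and not taken from the figure.

```python
# Sketch: given application-unit sizes and a formatted-unit (chunk) size,
# flag the application units that straddle a chunk boundary under a plain
# split/striping layout. Sizes and names are illustrative only.

def find_spanning_units(unit_sizes, chunk_size):
    spanning = []
    offset = 0
    for index, size in enumerate(unit_sizes):
        start_chunk = offset // chunk_size
        end_chunk = (offset + size - 1) // chunk_size
        if start_chunk != end_chunk:
            spanning.append(index)   # this AU crosses a formatted-unit boundary
        offset += size
    return spanning

# Ten application units of 3 blocks each, striped into 8-block chunks:
print(find_spanning_units([3] * 10, 8))   # [2, 5] -> AU2 and AU5 span boundaries
```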
  • In FIG. 8B, this part of the figure depicts the formatted data 216 and the formatted units 306, as in FIG. 8A.
  • this example depicts the formatted data 216 being split in some form and sent to two of the storage devices 218 .
  • each of the application units 304 such as AU 0 , AU 1 , AU 2 , AU 3 , etc., can be aligned with one of the formatted units 306 with one of the data pads 402 , as similarly described in FIG. 4 .
  • the application units 304 are pre-processed and aligned by the split+padding policy, allowing each of the application requests 220 for in-storage processing to be independent. This independence can maximize the opportunity for efficient, concurrent processing since no additional phase of processing is required for the formatted units 306 with the aligned application units 304, compared with the non-aligned units.
  • the computing system 900 can be an embodiment of the computing system 100 of FIG. 1 .
  • FIG. 9 depicts the in-storage processing coordinator 904 in a centralized coordination model.
  • the in-storage processing coordinator 904 is separate from or external to the host computer 102 and the storage devices 218 .
  • the terms separate and external represent that the in-storage processing coordinator 904 is in a separate system from the host computer 102 and the storage devices 218, and can be housed in a separate system housing.
  • the host computer 102 can be executing the application 202 of FIG. 2 .
  • the host computer 102 can also provide file and object services.
  • the in-storage processing coordinator 904 can be included as part of the network 120 of FIG. 1 , the data storage system 101 of FIG. 1 , implemented external to the host computer 102 , or a combination thereof. As previously described in FIG. 2 and other figures earlier, the in-storage processing coordinator 904 can include the request distributor 910 , the data preprocessor 908 , and the output coordinator 912 .
  • each of the storage devices 218 performs the in-storage processing functions.
  • Each of the storage devices 218 can include an in-storage processing engine 922 .
  • the in-storage processing engine 922 can perform the in-storage processing for its respective storage device 218 .
  • the storage devices 218 can be located in a number of places within the computing system 100 .
  • the storage devices 218 can be located within the data storage system 101 of FIG. 1 , as part of the network 120 of FIG. 1 , the hard disk drive 116 of FIG. 1 or storage external to the host computer 102 , or as part of the network attached storage 122 of FIG. 1 .
  • the in-storage processing coordinator 904 can function with the storage devices 218 in a number of ways.
  • the storage devices 218 can be configured to support various functions, such as RAID 0, 1, 2, 3, 4, 5, 6, and object stores.
  • the in-storage processing engine 922 can be implemented in a number of ways.
  • in-storage processing engine 922 can be implemented with software, hardware circuitry, or a combination thereof.
  • hardware circuitry can include a processor, an application specific integrated circuit (ASIC), an embedded processor, a microprocessor, hardware control logic, a hardware finite state machine (FSM), a digital signal processor (DSP), an FPGA, or a combination thereof.
  • the computing system 1000 can be an embodiment of the computing system 100 of FIG. 1 .
  • FIG. 10 depicts the in-storage processing coordinator 1004 in a centralized coordination model.
  • the in-storage processing coordinator 1004 is internal to the host computer 102 .
  • the term internal represents that the in-storage processing coordinator 1004 is in the same system as the host computer 102 and is generally housed in the same system housing as the host computer 102.
  • This embodiment also has the in-storage processing coordinator 1004 as separate from or external to the storage devices 218.
  • the host computer 102 can include the in-storage processing coordinator 1004 as well as the file and object services.
  • the host computer 102 can execute the application 202 of FIG. 2 .
  • the in-storage processing coordinator 1004 can include the request distributor 1010 , the data preprocessor 1008 , and the output coordinator 1012 .
  • each of the storage devices 218 performs the in-storage processing function.
  • Each of the storage devices 218 can include an in-storage processing engine 1022 .
  • the in-storage processing engine 1022 can perform the in-storage processing for its respective storage device 218 .
  • the storage devices 218 can be located in a number of places within the computing system 100 .
  • the storage devices 218 can be located within the data storage system 101 of FIG. 1 , as part of the network 120 of FIG. 1 , the hard disk drive 116 of FIG. 1 or storage external to the host computer 102 or as part of the network attached storage 122 of FIG. 1 .
  • the in-storage processing coordinator 1004 can function with the storage devices 218 in a number of ways.
  • the storage devices 218 can be configured to support various functions, such as RAID 0, 1, 2, 3, 4, 5, 6, and object stores.
  • the in-storage processing engine 1022 can be implemented in a number of ways.
  • in-storage processing engine 1022 can be implemented with software, hardware circuitry, or a combination thereof.
  • Examples of hardware circuitry can include similar examples as in FIG. 9 . The functions for this embodiment will be described in detail later.
  • Referring now to FIG. 11, therein is shown an example of an architecture view of the computing system 1100 in a yet further embodiment.
  • the computing system 1100 can be an embodiment of the computing system 100 of FIG. 1 .
  • FIG. 11 depicts the in-storage processing coordinator 1104 in a decentralized coordination model.
  • the in-storage processing coordinator 1104 is partitioned between the host computer 102 and the storage devices 218 . Additional examples of operational flow for this model are described in FIG. 15 and in FIG. 16 .
  • the in-storage processing coordinator 1104 can include the request distributor 1110 , the data preprocessor 1108 , or a combination thereof.
  • the data preprocessor 1108 and at least a portion of the request distributor 1110 are internal to the host computer 102 .
  • the term internal represents that the request distributor 1110 and the data preprocessor 1108 are in the same system as the host computer 102 and are housed in the same system housing as the host computer 102.
  • this embodiment has the output coordinator 1112 and at least a portion of the request distributor 1110 separate or external to the host computer 102 .
  • this embodiment provides the output coordinator 1112 and at least a portion of the request distributor 1110 as internal to the storage devices 218 .
  • the host computer 102 can execute the application 202 of FIG. 2 .
  • each of the storage devices 218 performs the in-storage processing function.
  • Each of the storage devices 218 can include an in-storage processing engine 1122 .
  • the in-storage processing engine 1122 can perform the in-storage processing for its respective storage device 218 .
  • the storage devices 218 can be located in a number of places within the computing system 100 .
  • the storage devices 218 can be located within the data storage system 101 of FIG. 1 , as part of the network 120 of FIG. 1 , the hard disk drive 116 of FIG. 1 or storage external to the host computer 102 or as part of the network attached storage 122 of FIG. 1 .
  • this partition of the in-storage processing coordinator 1104 can function with the storage devices 218 in a number of ways.
  • the storage devices 218 can be configured to support various functions, such as RAID 1 and object stores.
  • the in-storage processing engine 1122 can be implemented in a number of ways.
  • in-storage processing engine 1122 can be implemented with software, hardware circuitry, or a combination thereof.
  • Examples of hardware circuitry can include similar examples as in FIG. 9 .
  • Referring now to FIG. 12, therein is shown an example of an operational view of the computing system 100 for in-storage processing in a centralized coordination model.
  • FIG. 12 can represent embodiments for the centralized coordination model described from FIG. 9 or FIG. 10 .
  • FIG. 12 depicts the in-storage processing coordinator 204 and the interaction between the request distributor 210 and the data preprocessor 208 for the centralized coordination model.
  • FIG. 12 also depicts the output coordinator 212 .
  • FIG. 12 also depicts the in-storage processing coordinator 204 interacting with the storage devices 218 .
  • FIG. 12 depicts the in-storage processing coordinator 204 issuing the device requests 1202 for in-storage processing, such as write requests to the storage devices 218 .
  • the request distributor 210 can receive the application requests 220 of FIG. 2 for writing the application data 214 .
  • the request distributor 210 can also receive a data address 1204 as well as the transfer length 302 and a logical boundary 1206 of the application units 304 .
  • the data address 1204 can represent the address for the application data 214 .
  • the logical boundary 1206 represents the length or size of each of the application units 304 .
  • the request distributor 210 can send information to the data preprocessor 208 to translate the application data 214 to the formatted data 216 .
  • the request distributor 210 can also send the transfer length 302 for the application data 214 .
  • the application data 214 can be sent to the data preprocessor 208 as the application units 304 or the logical boundaries to the application units 304 .
  • the data preprocessor 208 can translate the application data 214 or the application units 304 to generate the formatted data 216 or the formatted units 306 of FIG. 3 . Examples of the types of translation can be one of the methods described in FIG. 2 and FIG. 3 through FIG. 6 .
  • the data preprocessor 208 can return the formatted data 216 or the formatted units 306 to the request distributor 210 .
  • the request distributor 210 can generate and issue device requests 1202 for writes to the storage devices 218 based on the formatting policies and policy for storing or for in-storage processing of the formatted data 216 or the formatted units 306 .
  • the device requests 1202 are based on the application requests 220 .
  • each of the storage devices 218 can include an in-storage processing function or application and the in-storage processing engine 922 .
  • Each of the storage devices 218 can receive the device requests 1202 and at least a portion of the formatted data 216 .
  • Although FIG. 12 depicts the device requests 1202 being issued to all of the storage devices 218, it is understood that the request distributor 210 can operate differently.
  • the device requests 1202 can be issued to some of the storage devices 218 and not necessarily to all of them.
  • the device requests 1202 can be issued at different times or can be issued as part of the error handling examples as discussed in FIG. 2 .
  • the in-storage processing coordinator 204 can receive all the application requests 220 from the application 202 , can issue all the device requests 1202 to the storage devices 218 , or a combination thereof.
  • the request distributor 210 can send or distribute the device requests 1202 to multiple storage devices 218 based on a placement scheme.
  • the output coordinator 212 can collect and manage the in-storage processing outputs 224 from the storage devices 218 .
  • the output coordinator 212 can then send the in-storage processing outputs 224 to the application 202 of FIG. 2 as similarly described in FIG. 7 .
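  • The centralized flow of FIG. 12 can be illustrated end to end with the following sketch; the round-robin placement scheme and the word-count stand-in for in-storage processing are assumptions chosen only for illustration, not the claimed policies.

```python
# Minimal end-to-end sketch of the centralized flow: distribute, process,
# gather. The round-robin placement and the word-count workload are assumed.

def distribute(formatted_units, num_devices):
    """Round-robin placement of formatted units onto storage devices."""
    device_requests = {dev: [] for dev in range(num_devices)}
    for i, unit in enumerate(formatted_units):
        device_requests[i % num_devices].append(unit)
    return device_requests

def in_storage_process(units):
    """Stand-in for an in-storage processing engine: count words per device."""
    return sum(len(unit.split()) for unit in units)

def gather(outputs_by_device):
    """Output coordinator: collect per-device outputs keyed by device id."""
    return dict(sorted(outputs_by_device.items()))

formatted_units = ["alpha beta", "gamma", "delta epsilon zeta", "eta"]
requests = distribute(formatted_units, num_devices=2)
outputs = {dev: in_storage_process(units) for dev, units in requests.items()}
print(gather(outputs))   # {0: 5, 1: 2}
```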
  • Referring now to FIG. 13, therein is shown an example of an operational view of the computing system 1300 issuing data write requests to the storage devices 1318 for in-storage processing in a decentralized coordination model.
  • the computing system 1300 can include similarities to the computing system 1100 of FIG. 11 .
  • FIG. 13 depicts the in-storage processing coordinator 1304 including the request distributor 1310 and the data preprocessor 1308 .
  • FIG. 12 and FIG. 13 depict an example of an operational view of computing system 1300 in terms of storing data to the storage devices 218 . That is, both FIGS. 12 and 13 focus on how to efficiently store data across the storage devices 218 for in-storage processing.
  • FIG. 13 also depicts the output coordinator 1312 and a portion of the request distributor 1310 in each of the devices 1318 .
  • FIG. 13 also depicts the in-storage processing coordinator 1304 interacting with the devices 1318 .
  • FIG. 13 depicts the in-storage processing coordinator 1304 issuing the device requests 1302 as write requests to the devices 1318 .
  • the request distributor 1310 in the in-storage processing coordinator 1304 can receive the application requests 220 of FIG. 2 for writing the application data 214 of FIG. 2 .
  • the request distributor 1310 can also receive a data address 1204 as well as the transfer length 302 of FIG. 3 and the logical boundary of the application units 304 of FIG. 3 .
  • the data address 1204 can represent the address for the application data 214 .
  • the request distributor 1310 can send information to the data preprocessor 1308 to translate the application data 214 to the formatted data 216 of FIG. 2 .
  • the request distributor 1310 can also send the transfer length 302 for the application data 214 .
  • the application data 214 can be sent as the application units 304 or the logical boundaries to the application units 304 to the data preprocessor 1308 .
  • the data preprocessor 1308 can translate the application data 214 or the application units 304 to generate the formatted data 216 or the formatted units 306 of FIG. 3 .
  • Examples of the types of translation can be one of the methods described in FIG. 2 and FIG. 3 through FIG. 6 .
  • the data preprocessor 1308 can return the formatted data 216 or the formatted units 306 to the request distributor 1310 in the in-storage processing coordinator 1304 .
  • the request distributor 1310 can generate and issue the application requests 220 for writes to the devices 1318 based on the formatting policies and policy for storing or for in-storage processing of the formatted data 216 or the formatted units 306 .
  • each of the devices 1318 can include an in-storage processing function or application and the in-storage processing engine 1322 .
  • Each of the devices 1318 can receive the device requests 1302 and at least a portion of the formatted data 216 .
  • Each of the devices 1318 can also include the output coordinator 1312 , a portion of the request distributor 1310 , or a combination thereof.
  • Although FIG. 13 depicts the device requests 1302 being issued to all of the devices 1318, it is understood that the request distributor 1310 can operate differently.
  • the device requests 1302 can be issued to some of the devices 1318 and not necessarily to all of them.
  • the device requests 1302 can be issued at different times or can be issued as part of the error handling examples as discussed in FIG. 2 .
  • the in-storage processing coordinator 1304 can receive the application requests 220 from the application 202 , can issue the device requests 1302 to the devices 1318 , or a combination thereof.
  • the request distributor 1310 in the in-storage processing coordinator 1304 can send or distribute the device requests 1302 to multiple devices 1318 based on a placement scheme.
  • the request distributor 1310 in each of the devices 1318 can receive the request from the in-storage processing coordinator 1304 .
  • the output coordinator 1312 can collect and manage the in-storage processing outputs 224 from the devices 1318 or one of the devices 1318 .
  • In a decentralized coordination model, there are various communication methods depending on the configuration of the storage group 206.
  • the functions of the request distributor 1310 and the output coordinator 1312 in the devices 1318 in a decentralized coordination model will be described later.
  • FIG. 14 depicts the in-storage processing coordinator 904 to be external to both the host computer 102 and the storage devices 218 .
  • Although the application 202 is shown outside of the host computer 102, it is understood that the application 202 can be executed by the host computer 102 as well as outside of the host computer 102.
  • Although the in-storage processing coordinator 904 is external to the host in FIG. 14, it is also understood that the in-storage processing coordinator 904 can be internal to the host, as in FIG. 10.
  • FIG. 14 , FIG. 15 , and FIG. 16 depict an example of an operational view of computing system 1300 of FIG. 13 in terms of processing data in the storage devices 218 . That is, FIGS. 14, 15, and 16 focus on how to efficiently process/compute the stored data in the storage devices 218 with in-storage processing techniques.
  • the application 202 can issue application requests 220 for in-storage processing to the host computer 102 .
  • the host computer 102 can issue host requests 1402 based on the application requests 220 from the application 202 .
  • the host requests 1402 can be sent to the in-storage processing coordinator 904 .
  • the in-storage processing coordinator 904 can translate the application data 214 of FIG. 2 and the application units 304 of FIG. 3 to generate the formatted data 216 of FIG. 2 and the formatted units 306 of FIG. 3 .
  • the in-storage processing coordinator 904 can also generate the device requests 1202 to the storage devices 218 .
  • the in-storage processing coordinator 904 can also collect and manage the in-storage processing outputs 224 from the storage devices 218 , and can deliver an aggregated output 1404 back to the host computer 102 , the application 202 , or a combination thereof.
  • the aggregated output 1404 is the combination of the in-storage processing outputs 224 from the storage devices 218 .
  • the aggregated output 1404 can be more than concatenation of the in-storage processing outputs 224 .
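  • As a sketch of why the aggregated output can be more than a concatenation, the example below contrasts concatenating per-device record lists with combining per-device partial counts; the count-and-merge workload is an assumption used only for illustration.

```python
# Sketch contrasting simple concatenation with an aggregation that combines
# per-device outputs (here, summing partial counts and merging record lists).

per_device_outputs = [
    {"matches": 3, "records": ["r1", "r2", "r3"]},
    {"matches": 1, "records": ["r9"]},
    {"matches": 2, "records": ["r4", "r7"]},
]

# Concatenation: just append the per-device record lists in arrival order.
concatenated = [rec for out in per_device_outputs for rec in out["records"]]

# Aggregation: reduce the partial results into a single combined answer.
aggregated = {
    "matches": sum(out["matches"] for out in per_device_outputs),
    "records": sorted(rec for out in per_device_outputs for rec in out["records"]),
}

print(concatenated)   # ['r1', 'r2', 'r3', 'r9', 'r4', 'r7']
print(aggregated)     # {'matches': 6, 'records': ['r1', 'r2', 'r3', 'r4', 'r7', 'r9']}
```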
  • the in-storage processing coordinator 904 can include the request distributor 910 .
  • the request distributor 910 can receive the application requests 220 as the host requests 1402 .
  • the request distributor 910 can generate the device requests 1202 from the host requests 1402 .
  • the request distributor 910 can also generate the sub-application requests 222 of FIG. 7 as the device requests 1202 .
  • the in-storage processing coordinator 904 can include the data preprocessor 908 .
  • the data preprocessor 908 can receive the information from the application requests 220 or the host requests 1402 through the request distributor 910 .
  • the data preprocessor 908 can format the application data 214 as appropriate based on the placement scheme onto the storage devices 218 .
  • the in-storage processing coordinator 904 can include the output coordinator 912 .
  • the output coordinator 912 can receive the in-storage processing outputs 224 from the storage devices 218 .
  • the output coordinator 912 can generate the aggregated output 1404 with the in-storage processing outputs 224 .
  • the output coordinator 912 can return the aggregated output 1404 to the host computer 102 .
  • the host computer 102 can also return the aggregated output 1404 to the application 202 .
  • the application 202 can continue to execute and utilize the in-storage outputs 224 , the aggregated output 1404 , or a combination thereof.
  • each of the storage devices 218 includes the in-storage processing engine 922 .
  • the in-storage processing engine 922 can receive and operate on a specific instance of the device requests 1202.
  • the in-storage processing engine 922 can generate the in-storage processing output 224 to be returned to the in-storage processing coordinator 904 or, as a specific example, to the output coordinator 912.
  • Referring now to FIG. 15, therein is shown an operational view of a computing system 1500 in a decentralized model in an embodiment with one output coordinator 1512.
  • the computing system 1500 can be the computing system 1100 of FIG. 11 .
  • the host computer 102 can issue an application request 220 to the storage devices 218 for in-storage processing.
  • the host computer 102 and the storage devices 218 can be similarly partitioned as described in FIG. 11 .
  • Each of the storage devices 218 can perform the in-storage processing.
  • Each of the storage devices 218 can provide its in-storage processing output 224 to the storage device 218 that received the application request 220 from the host computer 102 .
  • This storage device 218 can then return an aggregated output 1504 back to the host computer 102, the application 202, or a combination thereof.
  • the application 202 can continue to execute and utilize the in-storage outputs 224 , the aggregated output 1504 , or a combination thereof.
  • the application request 220 can be issued to one of the storage devices 218 . That one storage device 218 can issue the application request 220 or a device request 1202 to the other storage devices 218 .
  • the storage device 218 that received the application request 220 can decompose the application request 220 to partition the in-storage processing to the other storage devices 218 .
  • the device request 1202 can be that partitioned request based off the application request 220 and the in-storage processing execution by the previous storage devices 218 .
  • This example depicts a number of the devices labeled as “DEV_ 1 ”, “DEV_ 2 ”, “DEV_ 3 ”, and through “DEV_N”.
  • the term “N” in the figure is an integer.
  • the storage devices 218 in this example can perform in-storage processing. Each of the storage devices 218 is shown including an in-storage processing engine 1522, a data preprocessor 1508, and an output coordinator 1512.
  • all of the storage devices 218 are shown with the output coordinator 1512, although it is understood that the computing system 1500 can be partitioned differently.
  • only one of the storage devices 218 can include the output coordinator 1512 .
  • the output coordinator 1512 in each of the storage devices 218 can operate differently from another.
  • the output coordinator 1512 in DEV_2 through DEV_N can act as a pass-through to the next storage device 218 or return the in-storage processing output 224 back to DEV_1.
  • Each of the storage devices 218 can manage its request identification 710 of FIG. 7, the sub-request identification 708 of FIG. 7, or a combination thereof.
  • the host computer 102 can send the application request 220 to one of the storage devices 218 labeled DEV_ 1 .
  • the in-storage processing engine 1522 in DEV_1 can perform the appropriate level of in-storage processing and generate the in-storage processing output 224.
  • the in-storage processing output 224 from DEV_ 1 can be referred to as a first output 1524 .
  • the data preprocessor 1508 in DEV_ 1 can format or translate the information from the application request 220 that will be forwarded to DEV_ 2 , DEV_ 3 , and through to DEV_N.
  • the in-storage processing engine 1522 in DEV_2 can generate the in-storage processing output 224, which can be referred to as a second output 1526.
  • the output coordinator 1512 in DEV_2 can send the second output 1526 to DEV_1.
  • the in-storage processing engine 1522 in DEV_3 can generate the in-storage processing output 224, which can be referred to as a third output 1528.
  • the output coordinator 1512 in the DEV_ 3 can send the third output 1528 to DEV_ 1 .
  • the in-storage processing engine 1522 in DEV_N can generate the in-storage processing output 224, which can be referred to as an Nth output.
  • the output coordinator 1512 in the DEV_N can send the Nth output to DEV_ 1 .
  • the output coordinator 1512 in DEV_ 1 generates the aggregated output 1504 that includes the first output 1524 , the second output 1526 , the third output 1528 , and through the Nth output.
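  • A minimal sketch of this single-coordinator flow is shown below, assuming a toy Device class and a summing workload as the in-storage processing; none of these names or policies are taken from the embodiments.

```python
# Sketch of the single-coordinator decentralized flow: DEV_1 fans the request
# out to its peers, each peer returns its output, and DEV_1 aggregates.
# The Device class and the per-device processing (summing values) are assumed.

class Device:
    def __init__(self, name, data):
        self.name = name
        self.data = data          # data this device stores locally

    def process(self, request):
        # Stand-in in-storage processing engine: sum the locally stored values.
        return sum(self.data)

def coordinate(first_device, peer_devices, request):
    """DEV_1 processes locally, collects peer outputs, and aggregates them."""
    outputs = [first_device.process(request)]              # first output
    for peer in peer_devices:
        outputs.append(peer.process(request))               # second ... Nth outputs
    return {"per_device": outputs, "total": sum(outputs)}   # aggregated output

dev1 = Device("DEV_1", [1, 2])
peers = [Device("DEV_2", [3]), Device("DEV_3", [4, 5])]
print(coordinate(dev1, peers, request="sum"))
# {'per_device': [3, 3, 9], 'total': 15}
```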
  • Referring now to FIG. 16, therein is shown an operational view of a computing system 1600 in a decentralized model in an embodiment with multiple output coordinators 1612.
  • the computing system 1600 can be the computing system 1100 of FIG. 11 .
  • the host computer 102 can issue an application request 220 to storage devices 218 for in-storage processing.
  • the host computer 102 and the storage devices 218 can be similarly partitioned as described in FIG. 11 .
  • the application request 220 can be issued to one of the storage devices 218 . That storage device 218 then performs the in-storage processing.
  • the execution of the application request 220 and the in-storage processing results are issued or sent to another of the storage devices 218. This process can continue until all the storage devices 218 have performed the in-storage processing, and the last of the storage devices 218 can return the result to the first of the storage devices 218.
  • That first of the storage devices 218 then returns an aggregated output 1604 back to the host computer 102, the application 202, or a combination thereof.
  • the application 202 can continue to execute and utilize the in-storage outputs 224 of FIG. 2 , the aggregated output 1604 , or a combination thereof.
  • Although this example depicts DEV_1 providing the aggregated output 1604 to the host computer 102, it is understood that the last device, or DEV_N in this example, can provide the aggregated output 1604 back to the host computer 102 instead of DEV_1.
  • This example depicts a number of the storage devices 218 labeled as “DEV_ 1 ”, “DEV_ 2 ”, “DEV_ 3 ”, and through “DEV_N”.
  • the term “N” in the figure is an integer.
  • the storage devices 218 in this example can perform in-storage processing.
  • Each of the storage devices 218 is shown including an in-storage processing engine 1622, a data preprocessor 1608, and an output coordinator 1612.
  • all of the storage devices 218 are shown with the output coordinator 1612, although it is understood that the computing system 1600 can be partitioned differently.
  • only one of the storage devices 218 can include the output coordinator 1612 with full functionality.
  • the output coordinator 1612 in each of the storage devices 218 can operate differently from another.
  • the output coordinator 1612 in DEV_2 through DEV_N can act as a pass-through to the next storage device 218 or return the aggregated output 1604 back to DEV_1.
  • the host computer 102 can send the application request 220 to one of the storage devices 218 labeled DEV_ 1 .
  • the in-storage processing engine 1622 in DEV_ 1 can perform the appropriate level of in-storage processing and can generate the in-storage processing output 224 .
  • the in-storage processing output 224 from DEV_1 can be referred to as a first output 1624.
  • the DEV_ 1 can decompose the application request 220 to partition the in-storage processing to DEV_ 2 .
  • the device request 1202 of FIG. 12 can be that partitioned request based on the application request 220 and the in-storage processing execution by DEV_1. This process of decomposing and partitioning can continue through DEV_N.
  • the data preprocessor 1608 in DEV_ 1 can format or translate the information from the application request 220 that will be forwarded to DEV_ 2 .
  • the data preprocessor 1608 in DEV_ 1 can also format or translate the in-storage processing output 224 from DEV_ 1 or the first output 1624 .
  • the output coordinator 1612 in DEV_ 1 can send the output of the data preprocessor 1608 in DEV_ 1 , the first output 1624 , a portion of the application request 220 , or a combination thereof to DEV_ 2 .
  • DEV_ 2 can continue the in-storage processing of the application request 220 sent to DEV_ 1 .
  • the in-storage processing engine 1622 in DEV_ 2 can perform the appropriate level of in-storage processing based on the first output 1624 and can generate the in-storage processing output 224 from DEV_ 2 .
  • the in-storage processing output 224 from DEV_2 can be referred to as a second output 1626 or as "a partial aggregated output."
  • the data preprocessor 1608 in DEV_ 2 can format or translate the information from the application request 220 or the second output 1626 that will be forwarded to DEV_ 3 .
  • the data preprocessor 1608 in DEV_ 2 can also format or translate the in-storage processing output 224 from DEV_ 2 or the second output 1626 .
  • the output coordinator 1612 in DEV_ 2 can send the output of the data preprocessor 1608 in DEV_ 2 , the second output 1626 , a portion of the application request 220 , or a combination thereof to DEV_ 3 .
  • DEV_ 3 can continue the in-storage processing of the application request 220 sent to DEV_ 1 .
  • the in-storage processing engine 1622 in DEV_ 3 can perform the appropriate level of in-storage processing based on the second output 1626 and can generate the in-storage processing output 224 from DEV_ 3 .
  • the in-storage processing output 224 from DEV_3 can be referred to as a third output 1628.
  • the data preprocessor 1608 in DEV_ 3 can format or translate the information from the application request 220 or the third output 1628 that will be forwarded to DEV_ 1 .
  • the data preprocessor 1608 in DEV_3 can also format or translate the in-storage processing output 224 from DEV_3 or the third output 1628.
  • the output coordinator 1612 in DEV_ 3 can send the output of the data preprocessor 1608 in DEV_ 3 , the third output 1628 , a portion of the application request 220 , or a combination thereof to DEV_ 1 .
  • DEV_ 1 can return to the host computer 102 or the application 202 the aggregated output 1604 based on the first output 1624 , the second output 1626 , and the third output 1628 .
  • in-storage processing by one of the storage devices 218 that follows a previous storage device 218 can aggregate the in-storage processing outputs 224 of the storage devices 218 that preceded it.
  • the second output 1626 is an aggregation of the in-storage processing output 224 from the DEV_ 2 as well as the first output 1624 .
  • the third output 1628 is an aggregation of the in-storage processing output from DEV_ 3 as well as the second output 1626 .
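  • The chained, partial-aggregation behavior can be sketched as below, assuming a running sum as the aggregation; the fold operation and the function name are illustrative assumptions rather than the claimed method.

```python
# Sketch of the chained decentralized flow: each device folds its own output
# into the partial aggregate received from the previous device and passes the
# result on; the last fold yields the full aggregate returned to the host.

def chain_aggregate(device_outputs):
    """device_outputs[i] is DEV_(i+1)'s own in-storage processing output."""
    partial = 0
    trace = []
    for i, own_output in enumerate(device_outputs, start=1):
        partial = partial + own_output        # partial aggregated output so far
        trace.append((f"DEV_{i}", partial))
    return partial, trace

total, trace = chain_aggregate([5, 2, 7])     # first, second, third outputs
print(trace)   # [('DEV_1', 5), ('DEV_2', 7), ('DEV_3', 14)]
print(total)   # 14 -- the aggregated output returned to the host
```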
  • Referring now to FIG. 17, therein is shown an example of a flow chart for the request distributor 210 and the data preprocessor 208.
  • the request distributor 210 and the data preprocessor 208 can be operated in a centralized or decentralized model as described earlier, as examples.
  • this flow chart depicts how the application data 214 of FIG. 2 can be translated to the formatted data 216 of FIG. 2 based on the storage policies.
  • the storage policies can include the split policy, the split+padding policy, the split+redundancy policy, and storage without any chunking of the application units 304 of FIG. 3 to the formatted units 306 of FIG. 3 .
  • This example can represent the application request 220 of FIG. 2 as a write request.
  • the request distributor 210 of FIG. 2 can receive the application request 220 directly, or some form of the application request 220 through the host computer 102 of FIG. 1.
  • the application request 220 can include information such as the data address 1204 of FIG. 12 , the application data 214 , the transfer length 302 of FIG. 3 , the logical boundary 1206 of FIG. 12 , or a combination thereof.
  • the request distributor 210 can execute a chunk comparison 1702 .
  • the chunk comparison 1702 compares the transfer length with a chunk size 1704 of the storage group 206 , in this example operating as a RAID system.
  • the chunk size 1704 represents a discrete unit of storage size to be stored in the storage devices 218 of FIG. 2 in the storage group 206 of FIG. 2 .
  • the chunk size 1704 can represent the size of one of the formatted units 306 .
  • If the chunk comparison 1702 determines that the transfer length is greater than the chunk size 1704, the handling of the application request 220 can continue to a boundary query 1706. If the chunk comparison 1702 determines that the transfer length is not greater than the chunk size 1704, the handling of the application request 220 can continue to a device selection 1708.
  • the branch of the flow chart starting with the device selection 1708 represents the handling of the application data 214 without chunking of the application units 304 or the application data 214 .
  • An example of this can be the mirroring function as described in FIG. 6 .
  • the device selection 1708 determines which of the storage devices 218 in the storage group 206 will store the application data 214 as part of the application request 220 .
  • the request distributor 210 can generate the device requests 1202 of FIG. 12 as appropriate based on the application request 220 .
  • the request distributor 210 can distribute the application request 220 by splitting the application request 220 to sub-application requests 222 of FIG. 2 or by sending identical application requests 220 to multiple storage devices 218 .
  • the request distributor 210 can make the size of each of the sub-application requests 222 a multiple of the logical boundary 1206 of the application units 304.
  • the sub-application requests 222 can be the device requests 1202 issued to the storage devices 218 .
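  • One way to size such sub-application requests is sketched below, assuming an even-share policy with the remainder of the application units assigned to the earlier devices; the policy and function name are assumptions for illustration, not the claimed method.

```python
# Sketch of sizing sub-application requests so each one is a multiple of the
# logical boundary of the application units. Assumes the transfer length is
# itself a whole number of application units.

def size_sub_requests(transfer_length, logical_boundary, num_devices):
    units_total = transfer_length // logical_boundary     # whole application units
    base_units = units_total // num_devices
    extra_units = units_total % num_devices
    sizes = []
    for dev in range(num_devices):
        units = base_units + (1 if dev < extra_units else 0)
        sizes.append(units * logical_boundary)            # multiple of the boundary
    return sizes

# 10 application units of 512 bytes spread over 3 devices:
print(size_sub_requests(transfer_length=5120, logical_boundary=512, num_devices=3))
# [2048, 1536, 1536]
```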
  • multiple storage devices 218 can receive these application requests 220 .
  • the first in-storage processing output 224 of FIG. 2 returned can be accepted by the output coordinator 212 of FIG. 2 to be returned back to the application 202 .
  • the identical application requests 220 can be the device requests 1202 issued to the storage devices 218 .
  • the request distributor 210 can split the application request 220 to the sub-application requests 222 .
  • Each of these sub-application requests 222 can be of an arbitrary length.
  • the requests can be handled as a split function by the data preprocessor 208 .
  • the sub-application requests 222 can be the device requests 1202 issued to the storage devices 218 .
  • the request distributor 210 , the data preprocessor 208 , or a combination thereof can continue from the device selection 1708 to an address calculation 1710 .
  • the address calculation 1710 can calculate the address for the application data 214 or the formatted data 216 to be stored in the storage devices 218 receiving the device requests 1202 .
  • the address calculation 1710 is described being performed by the request distributor 210 or the data preprocessor 208 , although it is understood that the address calculation 1710 can be performed elsewhere.
  • the storage devices 218 receiving the device requests 1202 can perform the address calculation 1710 .
  • the address can be a pass-through from the application request 220 in which case the address calculation 1710 could have been performed by the application 202 of FIG. 2 or by the host computer 102 .
  • the flow chart can continue to a write non-chunk function 1712 .
  • Each of the storage devices 218 receiving the device request 1202 can write the application data 214 or the formatted data 216 on the storage device 218. Since each of the storage devices 218 contains the application data 214 in a complete or non-chunked form, any of the application data 214 or the formatted data 216 can undergo in-storage processing by the storage device 218 with the application data 214.
  • the boundary query 1706 determines if the logical boundary 1206 is provided in the application request 220 , as an example. If the boundary query 1706 determines that the logical boundary 1206 is provided, the flow chart can continue to a padding query 1714 . If the boundary query 1706 determines that the logical boundary 1206 is not provided, the flow chart can continue to a normal RAID query 1716 .
  • the branch of the flow chart starting with the normal RAID query 1716 represents the handling of the application data 214 with chunking of the application units 304 (or some of the application units 304 ).
  • An example of this can be the split function described in FIG. 3 .
  • this branch of the flow chart can be used for unstructured application data 214 or for application data 214 with no logical boundary 1206 .
  • the chunk size 1704 can be a fixed size or a variable size.
  • the normal RAID query 1716 determines if the application request 220 is for a normal RAID function as the in-storage processing, or not. If so, the flow chart can continue to a chunk function 1718. If not, the flow chart can continue to another portion of the flow chart or can return an error status back to the application 202.
  • the chunk function 1718 can split the application data 214 or the application units 304 or some portion of them in the chunk size 1704 for the storage devices 218 to receive the application data 214 .
  • the data preprocessor 208 can perform the chunk function 1718 to generate the formatted data 216 or the formatted units 306 with the application data 214 translated to the chunk size 1704 .
  • the data preprocessor 208 can interact with the request distributor 210 to issue the device requests 1202 to the storage devices 218 .
  • the chunk function 1718 is described as being performed by the data preprocessor 208, although it is understood that the chunk function 1718 can be executed differently.
  • the storage devices 218 receiving the device requests 1202 can perform the chunk function 1718 as part of the in-storage processing at the storage devices 218 .
  • the flow chart can continue to a write chunk function 1719 .
  • the write chunk function 1719 is an example of the in-storage processing at the storage devices 218 .
  • the write chunk function 1719 writes the formatted data 216 or the formatted units 306 at the storage devices 218 receiving the device requests 1202 from the request distributor 210 .
  • the branch below the padding query 1714 represents the handling of the application data 214 or the application units 304 or a portion thereof with the data pads 402 .
  • An example of this can be the split+padding function as described in FIG. 4 .
  • the padding query 1714 determines if the application data 214 or the application units 304 or some portion of them should be padded to generate the formatted data 216 or the formatted units 306 .
  • the data preprocessor 208 can perform the padding query 1714 .
  • the application data sizing 1720 calculates a data size 1722 of the application data 214 for the split+padding function.
  • the data size 1722 is the amount of the application data 214 to be partitioned for the formatted data 216 .
  • the application data sizing 1720 can determine the data size 1722 for the amount of the application unit 304 or multiple application units 304 for each of the formatted units 306 .
  • each of the formatted units 306 are of the chunk size 1704 and the data size 1722 is per chunk.
  • the data size 1722 can be calculated with Equation 1 below.
  • data size 1722 = floor(chunk size 1704 / logical boundary 1206) × logical boundary 1206 (Equation 1)
  • the data size 1722 is calculated with the floor function of the chunk size 1704 divided by the logical boundary 1206. The result of the floor function is then multiplied by the logical boundary 1206 to generate the data size 1722.
  • the flow chart can continue to a pad sizing 1724 .
  • the pad sizing 1724 calculates a pad size 1726 for the data pads 402 for each of the formatted units 306 .
  • the pad size 1726 can be calculated with Equation 2 below.
  • pad size 1726 = chunk size 1704 - data size 1722 (Equation 2)
  • the pad size 1726 per chunk or per each of the formatted units 306 can be calculated with the chunk size 1704 subtracted by the data size 1722 per chunk or per each of the formatted units 306 .
  • the flow chart can continue to a chunk number calculation 1728 .
  • the chunk number calculation 1728 determines a chunk number 1730 or the number of the formatted units 306 needed for the application data 214 .
  • the chunk number 1730 can be used to determine the size or length of the formatted data 216 .
  • the data preprocessor 208 can perform the chunk number calculation 1728 .
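  • A worked sketch of Equation 1, Equation 2, and the chunk number calculation is shown below; treating the chunk number 1730 as the ceiling of the application data length divided by the data size per chunk is an assumption, since the text does not spell that formula out.

```python
import math

# Worked sketch of Equations 1 and 2 and the chunk-number calculation for the
# split+padding policy. The ceil-based chunk count is an assumption.

def split_padding_sizes(chunk_size, logical_boundary, app_data_length):
    data_size = (chunk_size // logical_boundary) * logical_boundary   # Equation 1
    pad_size = chunk_size - data_size                                 # Equation 2
    chunk_number = math.ceil(app_data_length / data_size)             # assumed formula
    return data_size, pad_size, chunk_number

# 4096-byte chunks, 1500-byte application units, 30000 bytes of application data:
print(split_padding_sizes(chunk_size=4096, logical_boundary=1500,
                          app_data_length=30000))
# (3000, 1096, 10) -> 3000 data bytes + 1096 pad bytes per chunk, 10 chunks
```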
  • the flow chart can continue to a split function 1732 .
  • the split function 1732 partitions the application data 214 to the data size 1722 for each of the formatted units 306 .
  • the split function 1732 is part of generating the formatted data 216 where the application units 304 are aligned with the chunk size 1704 or the formatted units 306 .
  • the data preprocessor 208 can perform the split function 1732 .
  • the flow chart can continue to a write pad function 1734 .
  • the write pad function 1734 performs the in-storage processing of writing the formatted data 216 with the application data 214 partitioned to the data size 1722 and with the data pads 402 .
  • the data pads 402 can include additional information, such as parity, metadata, synchronization fields, or identification fields.
  • the request distributor 210 can send the device requests 1202 to the storage devices 218 to perform the write pad function 1734 of the formatted data 216 .
  • the flow chart can continue to a redundancy query 1736 .
  • this branch of the flow chart represents the redundancy function.
  • the redundancy function is described in FIG. 6 .
  • the flow chart can continue from the redundancy query 1736 to the application data sizing 1720 .
  • FIG. 17 depicts the application data sizing 1720 under the redundancy query 1736 to be a separate function from the application data sizing 1720 under the padding query 1714 , although it is understood that the two functions can perform the same operations and can also be the same function.
  • the application data sizing 1720 under the redundancy query 1736 can be computed using the expression found in Equation 1 described earlier.
  • the flow chart can continue to a chunk function 1718 .
  • the chunk function 1718 splits or partitions the application data 214 to the formatted data 216 as described in FIG. 6 .
  • the data preprocessor 208 can perform the chunk function 1718 .
  • FIG. 17 depicts the chunk function 1718 under the normal RAID query 1716 to be a separate function from the chunk function 1718 under the redundancy query 1736 , although it is understood that the two functions can perform the same operations and can also be the same function.
  • the flow chart can continue to a redundancy function 1738 .
  • the redundancy function 1738 copies the application data 214 that is in the range between the data size 1722 and the chunk size 1704 to additional chunks to generate the replica data 602 of FIG. 6.
  • the flow chart can continue to a write redundancy function 1740 .
  • the write redundancy function 1740 writes the formatted data 216 including the application data 214 and the replica data 602.
  • the request distributor 210 can issue the device requests 1202 to the storage devices 218 to perform the write redundancy function 1740.
  • the flow chart can continue to the normal RAID query 1716 .
  • the flow chart is described with the split+padding function separately from the redundancy function, although it is understood that the flow chart can provide a different operation.
  • the flow chart can be arranged to provide the split+redundancy function as described in FIG. 5 .
  • this can be accomplished with the redundancy query 1736 being placed before the write pad function 1734 .
  • the redundancy function 1738 above could be modified to operate only on the non-aligned application units 304 to form the aligned data 504 of FIG. 5 as opposed to the replica data 602 .
  • the modified redundancy function can be followed by a further write function.
  • the further write function would combine portions of the write pad function 1734 and the write redundancy function 1740 .
  • the write pad function 1734 can be utilized to write a portion of the formatted data 216 with the data pads 402, and the write redundancy function 1740 can write the aligned data 504 as opposed to the replica data 602.
  • the centralized embodiment can be the computing system 900 of FIG. 9 or the computing system 1000 of FIG. 10 .
  • the decentralized embodiment can be the computing system 1100 of FIG. 11 .
  • the flow chart on the left-hand side of FIG. 18 represents an example of a flow chart for a centralized embodiment.
  • the flow chart on the right-hand side of FIG. 18 represents an example of a flow chart for a decentralized embodiment.
  • the request distributor 210 of FIG. 2 can receive the application request 220 of FIG. 2 .
  • the application request 220 can include the data address 1204 of FIG. 12 , the application data 214 of FIG. 2 , and the transfer length 302 of FIG. 3 .
  • the data preprocessor 208 of FIG. 2 can execute a replica query 1802 .
  • the replica query 1802 determines if the replica data 602 of FIG. 6 should be created or not. As an example, the replica query 1802 can make this determination by checking whether a number 1804 of the replica data 602 being requested is greater than zero. If so, the flow chart can continue to a create replica 1806. If not, the flow chart can continue to the device selection 1708.
  • the device selection 1708 can be the same function or perform the same or similar function as described in FIG. 17 .
  • the flow chart can continue to the address calculation 1710 .
  • the address calculation 1710 can be the same function or perform the same or similar function as described in FIG. 17 .
  • the flow chart can continue to the write non-chunk function 1712 .
  • the write non-chunk function 1712 can be the same function or perform the same or similar function as described in FIG. 17 .
  • the request distributor 210 can execute the device selection 1708, the address calculation 1710, or a combination thereof, and include the outputs of these operations as part of the device request 1202 of FIG. 12.
  • the write non-chunk function 1712 can be performed by one of the storage devices 218 to store the application data 214 .
  • the flow chart can continue to the create replica 1806 .
  • the replica query 1802 can make this determination when the number 1804 of replicas sought is greater than zero.
  • the create replica 1806 can generate the replica data 602 from the application data 214 .
  • the replica data 602 can be as described in FIG. 6 .
  • the data preprocessor 208 can perform the create replica 1806 .
  • the create replica 1806 can generate the number 1804 of the replica data 602 as needed and not just one.
  • the flow chart can continue to a prepare replica 1808 .
  • the request distributor 210 can prepare each of the replica data 602 for the device selection 1708 .
  • the replica data 602 can be written to the storage devices 218 following the flow chart from the device selection 1708 , as already described.
  • the request distributor 210 can receive the application request 220 .
  • the application request 220 can include the data address 1204 , the application data 214 , the transfer length 302 , and the number 1804 of the replica data 602 .
  • the request distributor 210 can send one of the device requests 1202 to one of the storage devices 218 . That storage device 218 can perform the address calculation 1710 .
  • the address calculation 1710 can be the same function or perform the same or similar function as described in FIG. 17 and as for the centralized embodiment.
  • the same storage device 218 can also perform the write non-chunk function 1712 .
  • the write non-chunk function 1712 can be the same function or perform the same or similar function as described in FIG. 17 and as for the centralized embodiment.
  • the flow chart can continue to the replica query 1802 .
  • the replica query can be the same function or perform the same or similar function as described for the centralized embodiment. If the number 1804 for the replica data 602 is not greater than zero, the process to write additional data stops for this particular application request 220 .
  • the flow chart can continue to a group selection 1810.
  • the group selection 1810 can select one of the storage devices 218 in the same replica group 1812 .
  • the replica group 1812 is a portion of the storage devices 218 of FIG. 2 in the storage group 206 of FIG. 2 designated to be part of a redundancy function for the application data 214 and for in-storage processing.
  • the request distributor 210 can perform the replica query 1802 , the group selection 1810 , or a combination thereof.
  • the flow chart can continue to a number update 1814 .
  • the number update 1814 can decrement the number 1804 for replica data 602 still to be written to the replica group 1812 .
  • the decrement amount can be by an integer value, such as one.
  • the request distributor 210 can perform the number update 1814 .
  • the flow chart can continue to a request generation 1816 .
  • the request generation 1816 generates one of the device requests 1202 to another of the storage devices 218 in the replica group 1812 for writing the replica data 602 .
  • the request distributor 210 can perform the request generation 1816 .
  • the flow chart can loop back (not drawn in FIG. 18 ) to the replica query 1802 and iterate until the number 1804 has reached zero. At this point, the replica data 602 has been written to the replica group 1812 .
  • the decentralized embodiment is described as operating in a serial manner writing to one of the storage devices 218 at a time, although it is understood that the decentralized embodiment can operate differently.
  • the request distributor 210 can issue a number of the device requests 1202 to the storage devices 218 in the replica group 1812 and have the replica data 602 written on multiple storage devices 218 simultaneously, before the other storage devices 218 in the replica group 1812 complete the write.
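For illustration only, the following Python sketch models the decentralized replica-write loop described above (replica query 1802, group selection 1810, number update 1814, request generation 1816). The class names, the dictionary-backed store, and the helper functions are assumptions of this sketch and are not part of the specification.

```python
# A minimal sketch, not the patented implementation; all names are illustrative.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class DeviceRequest:          # stands in for a device request 1202
    device_id: int
    address: int
    data: bytes
    replicas_remaining: int   # stands in for the number 1804

def write_non_chunk(store: Dict[int, Dict[int, bytes]], req: DeviceRequest) -> None:
    # The selected storage device stores the data at the calculated address.
    store.setdefault(req.device_id, {})[req.address] = req.data

def select_in_group(group: List[int], current: int) -> int:
    # Group selection 1810: pick the next device in the same replica group 1812.
    return group[(group.index(current) + 1) % len(group)]

def write_with_replicas(store, group: List[int], req: DeviceRequest) -> None:
    while True:
        write_non_chunk(store, req)                 # write on the current device
        if req.replicas_remaining <= 0:             # replica query 1802
            break                                   # all replicas written
        req = DeviceRequest(                        # request generation 1816
            device_id=select_in_group(group, req.device_id),
            address=req.address,
            data=req.data,
            replicas_remaining=req.replicas_remaining - 1,  # number update 1814
        )

# Example: one original write plus two replicas across a three-device group.
storage: Dict[int, Dict[int, bytes]] = {}
write_with_replicas(storage, group=[0, 1, 2],
                    req=DeviceRequest(device_id=0, address=128,
                                      data=b"app-data", replicas_remaining=2))
```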
  • the computing system provides efficient distributed processing by providing methods and apparatuses for performing in-storage processing of application data with multiple storage devices.
  • An execution of an application can be shared by distributing the execution among various devices in a storage group.
  • Each of the devices can perform in-storage processing with the application data as requested by an application request.
  • the computing system can reduce overall system power consumption by reducing the number of inputs/outputs between the application execution and the storage device. This reduction is achieved by having the devices perform the in-storage processing rather than having the application merely store, read back, and re-store the data. The in-storage processing outputs can instead be returned as an aggregated output from the various devices that performed the in-storage processing back to the application. The application can continue to execute and utilize the in-storage outputs, the aggregated output, or a combination thereof.
  • the computing system provides for reduced total cost of ownership by providing formatting and translation functions for the application data for different configurations or organizations of the storage device. Further, the computing system also provides translation for the type of in-storage processing to be carried out by the devices in the storage device. Examples of types of translation or formatting include split, split+padding, split+redundancy, and mirroring.
  • the computing system provides more efficient execution of the application with fewer interrupts to the application via the output coordination of the in-storage processing outputs from the storage devices.
  • the output coordination can buffer the in-storage processing outputs and can also sort the order of each of the in-storage processing outputs before returning an aggregated output to the application.
  • the application can continue to execute and utilize the in-storage outputs, the aggregated output, or a combination thereof.
  • the computing system further minimizes integration obstacles by allowing the devices in the storage group to have different or the same functionalities.
  • one of the devices can function as the only output coordinator for all the in-storage processing outputs from the other devices.
  • the aggregation function can be distributed amongst the devices, passing along and performing partial aggregation from device to device until one of the devices returns the full aggregated output back to the application.
  • the application can continue to execute and utilize the in-storage outputs, the aggregated output, or a combination thereof.
  • the modules described in this application can be hardware implementations or hardware accelerators in the computing system 100 .
  • the modules can also be hardware implementations or hardware accelerators within the computing system 100 or external to the computing system 100.
  • the modules described in this application can be implemented as instructions stored on a non-transitory computer readable medium to be executed by the computing system 100 .
  • the non-transitory computer medium can include memory internal to or external to the computing system 100 .
  • the non-transitory computer readable medium can include non-volatile memory, such as a hard disk drive, non-volatile random access memory (NVRAM), a solid-state storage device (SSD), compact disk (CD), digital video disk (DVD), or universal serial bus (USB) flash memory devices.
  • the method 1900 includes: performing in-storage processing with a storage device with formatted data based on application data from an application in a block 1902 ; and returning an in-storage processing output from the storage device to the application for continued execution in a block 1904 .
  • the method 1900 can further include receiving a sub-application request at the storage device based on an application request from the application for performing in-storage processing.
  • the method 1900 can further include sorting in-storage processing outputs from a storage group including the storage device.
  • the method 1900 can further include issuing a device request based on an application request from the application to a storage group including the storage device.
  • the method 1900 can further include issuing a device request from the storage device; receiving the device request at another storage device; generating another device request by the another storage device; and receiving the another device request by yet another storage device.
  • the method 1900 can further include sending in-storage processing outputs by a storage group including the storage device to be aggregated and sent to the application.
  • the method 1900 can further include aggregating an in-storage processing output as a partial aggregated output to be returned to the application.
  • the method 1900 can further include generating the formatted data based on the application data.
  • the method 1900 can further include generating a formatted unit of the formatted data with an application unit of the application data and a data pad.
  • the method 1900 can further include generating a formatted unit of the formatted data with non-aligned instances of application units of the application data and a data pad.
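For illustration only, the following sketch shows one simplified reading of the method 1900: the application data is split into formatted units, each unit is processed in storage, and the outputs are returned to the application for continued execution. The splitting rule and the example in-storage operation are assumptions of this sketch, not the claimed method.

```python
# A minimal sketch under assumed, simplified semantics; names are illustrative.
from typing import Callable, List

def split(data: bytes, unit: int) -> List[bytes]:
    # Format the application data into fixed-size formatted units.
    return [data[i:i + unit] for i in range(0, len(data), unit)]

def method_1900(app_data: bytes, devices: int,
                isp: Callable[[bytes], int]) -> List[int]:
    units = split(app_data, unit=max(1, len(app_data) // devices))  # block 1902
    return [isp(u) for u in units]                                  # block 1904

# Example in-storage operation: count a byte pattern inside each unit.
outputs = method_1900(b"abcabcabc", devices=3, isp=lambda u: u.count(b"abc"))
```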

Abstract

A computing system includes: a storage device configured to perform in-storage processing with formatted data based on application data from an application; and return an in-storage processing output to the application for continued execution.

Description

    CROSS-REFERENCE TO RELATED APPLICATION(S)
  • This application claims the benefit of U.S. Provisional Patent Application Ser. No. 62/098,530 filed Dec. 31, 2014, and the subject matter thereof is incorporated herein by reference thereto.
  • TECHNICAL FIELD
  • Embodiments relate generally to a computing system, and more particularly to a system with distributed compute-enabled storage.
  • BACKGROUND
  • Modern consumer and industrial electronics, such as computing systems, servers, appliances, televisions, cellular phones, automobiles, satellites, and combination devices, are providing increasing levels of functionality to support modern life. These devices are more interconnected. Storage of information is becoming more of a necessity.
  • Research and development in the existing technologies can take a myriad of different directions. Storing information locally or over a distributed network is becoming more important. Processing efficiency and inputs/outputs between storage and computing resources are more problematic as the amount of data, computation, and storage increases.
  • Thus, a need still remains for a computing system with distributed compute-enabled storage group for ubiquity of storing and retrieving information regardless of the source of data or the request for the data, respectively. In view of the ever-increasing commercial competitive pressures, along with growing consumer expectations and the diminishing opportunities for meaningful product differentiation in the marketplace, it is increasingly critical that answers be found to these problems. Additionally, the need to reduce costs, improve efficiencies and performance, and meet competitive pressures adds an even greater urgency to the critical necessity for finding answers to these problems.
  • Solutions to these problems have been long sought but prior developments have not taught or suggested any solutions and, thus, solutions to these problems have long eluded those skilled in the art.
  • SUMMARY
  • An embodiment provides an apparatus, including: a storage device configured to perform in-storage processing with formatted data based on application data from an application; and return an in-storage processing output to the application for continued execution.
  • An embodiment provides a method including: performing in-storage processing with a storage device with formatted data based on application data from an application; and returning an in-storage processing output from the storage device to the application for continued execution.
  • Certain embodiments of the invention have other steps or elements in addition to or in place of those mentioned above. The steps or elements will become apparent to those skilled in the art from a reading of the following detailed description when taken with reference to the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a computing system with distributed compute-enabled storage group in an embodiment of the present invention.
  • FIG. 2 is an example of an architectural view of a computing system with a distributed compute-enabled storage device.
  • FIG. 3 is an example of an operational view for a split function of the data preprocessor.
  • FIG. 4 is an example of an operational view for a split+padding function of the data preprocessor.
  • FIG. 5 is an example of an operational view for a split+redundancy function of the data preprocessor.
  • FIG. 6 is an example of an operational view for a mirroring function of the data preprocessor.
  • FIG. 7 is an example of an architectural view of the output coordinator.
  • FIGS. 8A and 8B are detailed examples of an operational view of the split and split+padding functions.
  • FIG. 9 is an example of an architectural view of the computing system in an embodiment.
  • FIG. 10 is an example of an architectural view of the computing system in a further embodiment.
  • FIG. 11 is an example of an architectural view of the computing system in a yet further embodiment.
  • FIG. 12 is an example of an operational view of the computing system issuing device requests for in-storage processing in a centralized coordination model.
  • FIG. 13 is an example of an operational view of the computing system issuing device requests for in-storage processing in a decentralized coordination model.
  • FIG. 14 is an operational view for the computing system of a centralized coordination model.
  • FIG. 15 is an operational view for a computing system in a decentralized model in an embodiment with one output coordinator.
  • FIG. 16 is an operational view of a computing system in a decentralized model in an embodiment with multiple output coordinators.
  • FIG. 17 is an example of a flow chart for the request distributor and the data preprocessor.
  • FIG. 18 is an example of a flow chart for a mirroring function for centralized and decentralized embodiments.
  • FIG. 19 is a flow chart of a method of operation of a computing system in an embodiment of the present invention.
  • DETAILED DESCRIPTION
  • Various embodiments provide a computing system for efficient distributed processing by providing methods and apparatus for performing in-storage processing of the application data with multiple storage devices. An execution of an application can be shared by distributing the execution among various storage devices in a storage group. Each of the storage devices can perform in-storage processing with the application data as requested by an application request.
  • Various embodiments provide a computing system to reduce overall system power consumption by reducing the number of inputs/outputs between the application execution and the storage device. This reduction is achieved by having the storage devices perform in-storage processing rather than having the application merely store, read back, and re-store the data. The in-storage processing outputs can instead be returned as an aggregated output from the various storage devices that performed the in-storage processing, back to the application. The application can continue to execute and utilize the in-storage outputs, the aggregated output, or a combination thereof.
  • Various embodiments provide a computing system that reduces total cost of ownership by providing formatting and translation functions for the application data for different configurations or organizations of the storage device. Further, the computing system also provides translation for the in-storage processing to be carried out by the various storage devices as part of the storage group. Examples of types of translation or formatting include split, split+padding, split+redundancy, and mirroring.
  • Various embodiments provide a computing system that also minimizes integration obstacles by allowing the storage devices to handle more of the in-storage processing coordination functions, with less being done by the host executing the application. Another embodiment allows for the in-storage processing coordination to increasingly be located and operate outside of both the host and the storage devices.
  • Various embodiments provide a computing system with more efficient execution of the application with fewer interrupts to the application by coordinating the outputs of the in-storage processing from the storage devices. The output coordination can buffer the in-storage processing outputs and can also sort the order of each of the in-storage processing outputs before returning an aggregated output to the application. The application can continue to execute and utilize the in-storage outputs, the aggregated output, or a combination thereof.
  • Various embodiments provide a computing system further minimizing integration obstacles by allowing the storage devices in the storage group to have different or the same functionalities. As an example, one of the storage devices can function as the only output coordinator for all the in-storage processing outputs from the other storage devices. As a further example, the aggregation function can be distributed amongst the storage devices, passing along from storage device to storage device and performing partial aggregation at each storage device, until a final one of the storage devices returns the full aggregated output back to the application. The application can continue to execute and utilize the in-storage outputs, the aggregated output, or a combination thereof.
  • The following embodiments are described in sufficient detail to enable those skilled in the art to make and use the invention. It is to be understood that other embodiments may be evident based on the present disclosure, and that system, process, architectural, or mechanical changes can be made to the embodiments as examples without departing from the scope of the present invention.
  • In the following description, numerous specific details are given to provide a thorough understanding of the invention. However, it will be apparent that the invention and various embodiments may be practiced without these specific details. In order to avoid obscuring an embodiment of the present invention, some well-known circuits, system configurations, and process steps are not disclosed in detail.
  • The drawings showing embodiments of the system are semi-diagrammatic, and not to scale and, particularly, some of the dimensions are for the clarity of presentation and are shown exaggerated in the drawing figures. Similarly, although the views in the drawings for ease of description generally show similar orientations, this depiction in the figures is arbitrary for the most part. Generally, an embodiment can be operated in any orientation.
  • The term “module” referred to herein can include software, hardware, or a combination thereof in an embodiment of the present invention in accordance with the context in which the term is used. For example, the software can be machine code, firmware, embedded code, application software, or a combination thereof. Also for example, the hardware can be circuitry, processor, computer, integrated circuit, integrated circuit cores, a pressure sensor, an inertial sensor, a microelectromechanical system (MEMS), passive devices, or a combination thereof. Additional examples of hardware circuitry can be digital circuits or logic, analog circuits, mixed-mode circuits, optical circuits, or a combination thereof. Further, if a module is written in the apparatus claims section below, the modules are deemed to include hardware circuitry for the purposes and the scope of apparatus claims.
  • The modules in the following description of the embodiments can be coupled to one another as described or as shown. The coupling can be direct or indirect without or with, respectively, intervening between coupled items. The coupling can be physical contact or by communication between items.
  • Referring now to FIG. 1, therein is shown a computing system 100 with a distributed compute-enabled storage group in an embodiment of the present invention. The computing system 100 is depicted in FIG. 1 as a functional block diagram of the computing system 100 with a data storage system 101. The functional block diagram depicts the data storage system 101 installed in a host computer 102.
  • Various embodiments can include the computing system 100 with devices for storage, such as a solid state disk 110, a non-volatile memory 112, hard disk drives 116, memory devices 117, and network attached storage 122. These devices for storage can include capabilities to perform in-storage processing, that is, to independently perform relatively complex computations at a location outside of a traditional system CPU. As part of the in-storage processing paradigm, various embodiments of the present inventive concept manage the distribution of data, the location of data, and the location of processing tasks for in-storage processing. Further, these in-storage computing enabled storage devices can be grouped or clustered into arrays. Various embodiments manage the allocation of data and/or processing based on the architecture and capabilities of these devices or arrays. In-storage processing is further explained later.
  • As an example, the host computer 102 can be a server or a workstation. The host computer 102 can include at least a central processing unit 104, host memory 106 coupled to the central processing unit 104, and a host bus controller 108. The host bus controller 108 provides a host interface bus 114, which allows the host computer 102 to utilize the data storage system 101.
  • It is understood that the function of the host bus controller 108 can be provided by the central processing unit 104 in some implementations. The central processing unit 104 can be implemented with hardware circuitry in a number of different manners. For example, the central processing unit 104 can be a processor, an application specific integrated circuit (ASIC), an embedded processor, a microprocessor, hardware control logic, a hardware finite state machine (FSM), a digital signal processor (DSP), a field programmable gate array (FPGA), or a combination thereof.
  • The data storage system 101 can be coupled to a solid state disk 110, such as a non-volatile memory based storage group having a peripheral interface system, or a non-volatile memory 112, such as an internal memory card for expanded or extended non-volatile system memory.
  • The data storage system 101 can also be coupled to hard disk drives (HDD) 116 that can be mounted in the host computer 102, external to the host computer 102, or a combination thereof. The solid state disk 110, the non-volatile memory 112, and the hard disk drives 116 can be considered as direct attached storage (DAS) devices, as an example.
  • The data storage system 101 can also support a network attach port 118 for coupling to a network 120. Examples of the network 120 can include a personal area network (PAN), a local area network (LAN), a storage area network (SAN), a wide area network (WAN), or a combination thereof. The network attach port 118 can provide access to network attached storage (NAS) 122. The network attach port 118 can also provide connection to and from the host bus controller 108.
  • While the network attached storage 122 is shown as hard disk drives, this is an example only. It is understood that the network attached storage 122 could include any non-volatile storage technology, such as magnetic tape storage (not shown), or storage devices similar to the solid state disk 110, the non-volatile memory 112, or the hard disk drives 116 that are accessed through the network attach port 118. Also, the network attached storage 122 can include aggregated resources, such as just a bunch of disks (JBOD) systems or redundant array of intelligent disks (RAID) systems, as well as other network attached storage 122.
  • The data storage system 101 can be attached to the host interface bus 114 for providing access to and interfacing with multiple of the direct attached storage (DAS) devices via a cable 124 for a storage interface, such as Serial Advanced Technology Attachment (SATA), Serial Attached SCSI (SAS), or Peripheral Component Interconnect Express (PCI-e) attached storage devices.
  • The data storage system 101 can include a storage engine 115 and the memory devices 117. The storage engine 115 can be implemented with hardware circuitry, software, or a combination thereof in a number of ways. For example, the storage engine 115 can be implemented as a processor, an application specific integrated circuit (ASIC), an embedded processor, a microprocessor, hardware control logic, a hardware finite state machine (FSM), a digital signal processor (DSP), an FPGA, or a combination thereof.
  • The central processing unit 104 or the storage engine 115 can control the flow and management of data to and from the host computer 102, and from and to the direct attached storage (DAS) devices, the network attached storage 122, or a combination thereof. The storage engine 115 can also perform data reliability check and correction, which will be further discussed later. The storage engine 115 can also control and manage the flow of data between the direct attached storage (DAS) devices and the network attached storage 122 and amongst themselves. The storage engine 115 can be implemented in hardware circuitry, a processor running software, or a combination thereof.
  • For illustrative purposes, the storage engine 115 is shown as part of the data storage system 101, although the storage engine 115 can be implemented and partitioned differently. For example, the storage engine 115 can be implemented as part of the host computer 102, implemented in software, implemented in hardware, or a combination thereof. The storage engine 115 can be external to the data storage system 101. As examples, the storage engine 115 can be part of the direct attached storage (DAS) devices described above, the network attached storage 122, or a combination thereof. The functionalities of the storage engine 115 can be distributed as part of the host computer 102, the direct attached storage (DAS) devices, the network attached storage 122, or a combination thereof. The central processing unit 104 or some portion of it can also be in the data storage system 101, the direct attached storage (DAS) devices, the network attached storage 122, or a combination thereof.
  • The memory devices 117 can function as a local cache to the data storage system 101, the computing system 100, or a combination thereof. The memory devices 117 can be a volatile memory or a nonvolatile memory. Examples of the volatile memory can include static random access memory (SRAM) or dynamic random access memory (DRAM).
  • The storage engine 115 and the memory devices 117 enable the data storage system 101 to meet the performance requirements of data provided by the host computer 102 and store that data in the solid state disk 110, the non-volatile memory 112, the hard disk drives 116, or the network attached storage 122.
  • For illustrative purposes, the data storage system 101 is shown as part of the host computer 102, although the data storage system 101 can be implemented and partitioned differently. For example, the data storage system 101 can be implemented as a plug-in card in the host computer 102, as part of a chip or chipset in the host computer 102, as partially implemented in software and partially implemented in hardware in the host computer 102, or a combination thereof. The data storage system 101 can be external to the host computer 102. As examples, the data storage system 101 can be part of the direct attached storage (DAS) devices described above, the network attached storage 122, or a combination thereof. The data storage system 101 can be distributed as part of the host computer 102, the direct attached storage (DAS) devices, the network attached storage 122, or a combination thereof.
  • Referring now to FIG. 2, therein is shown an architectural view of a computing system 100 with a distributed compute-enabled storage device. The architectural view can depict an example of relationships between some parts in the computing system 100. As an example, the architectural view can depict the computing system 100 to include an application 202, an in-storage processing coordinator 204, and a storage group 206.
  • As an example, the storage group 206 can be partitioned in the computing system 100 of FIG. 1 in a number of ways. For example, the storage group 206 can be part of or distributed among the data storage system 101 of FIG. 1, the hard disk drives 116 of FIG. 1, the network attached storage 122 of FIG. 1, the solid state disk 110 of FIG. 1, the non-volatile memory 112 of FIG. 1, or a combination thereof.
  • The application 202 is a process executing a function. The application 202 can provide an end-user (not shown) function or other functions related to the operation, control, usage, or communication of the computing system 100. As an example, the application 202 can be a software application executed by a processor, a central processing unit (CPU), a programmable hardware state machine, or other hardware circuitry that can execute software code from the software application. As a further example, the application 202 can be a function executed purely in hardware circuitry, such as logic gates, finite state machine (FSM), transistors, or a combination thereof. The application 202 can execute on the central processing unit 104 of FIG. 1.
  • The in-storage processing coordinator 204 manages the communication and activities between the application 202 and the storage group 206. The in-storage processing coordinator 204 can manage the operations between the application 202 and the storage group 206. As an example, the in-storage processing coordinator 204 can translate information between the application 202 and the storage group 206. Also for example, the in-storage processing coordinator 204 can direct information flow and assignments between the application 202 and the storage group 206. As an example, the in-storage processing coordinator 204 can include a data preprocessor 208, a request distributor 210, and an output coordinator 212.
  • As an example, the in-storage processing coordinator 204 or portions of it can be executed by the central processing unit 104 or other parts of the host computer 102. The in-storage processing coordinator 204 or portions of it can also be executed by the data storage system 101. As a specific example, the storage engine 115 of FIG. 1 can execute the in-storage processing coordinator 204 or portions of it. The hard disk drives 116 of FIG. 1, the network attached storage 122 of FIG. 1, the solid state disk 110 of FIG. 1, the non-volatile memory 112 of FIG. 1, or a combination thereof can execute the in-storage processing coordinator 204 or portions of it.
  • The data preprocessor 208 performs data formatting of application data 214 and placement of formatted data 216. The application data 214 is the information or data generated by the application 202. The formatting can enable storing the application data 214 as the formatted data 216 across multiple storage devices 218 of the storage group 206 for in-storage processing (ISP).
  • In-storage processing refers to the processing or manipulation of the formatted data 216 to be sent back to the application 202 or the system executing the application 202. The in-storage processing is more than mere storing and retrieval of the formatted data 216. Examples of the manipulation or processing as part of the in-storage processing can include integer or floating point math operations, Boolean operations, reorganization of data bits or symbols, or a combination thereof. Other examples of manipulation or processing as part of the in-storage processing can include searching, sorting, comparing, filtering, or combining the formatted data 216, the application data 214, or a combination thereof.
  • As a further example, the data preprocessor 208 can format the application data 214 from the application 202 and generate the formatted data 216 to be processed outside or independent from execution of the application 202. This independent processing can be performed with the in-storage processing. The application data 214 can be independent of and not necessarily the same format as those stored in the storage group 206. The format of the application data 214 can be different than the formatted data 216, which will be described later.
  • Depending on the type of the application data 214, array configurations of the storage group 206, or other user-defined policies, the application data 214 can be processed in various ways. As an example, the policies can refer to availability requirements so as to affect the array configuration, such as mirroring, of the storage group 206. As a further example, the policies can refer to performance requirements as to further affect the array configuration, such as striping, of the storage group 206.
  • As examples of translation, the application data 214 can be translated to the formatted data 216 using various methods, such as split, split+padding, split+redundancy, and mirroring. These methods can create independent data sets of the formatted data 216 that can be distributed to multiple storage devices 218, allowing for concurrent in-storage processing. The concurrent in-storage processing refers to each of the storage devices 218 in the storage group 206 being able to independently process or operate on the formatted data 216, the application data 214, or a combination thereof. This independent processing or operation can be independent of the execution of the application 202, the other storage devices 218 of the storage group 206 that received some of the formatted data 216 from the application data 214, or a combination thereof.
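For illustration only, the following sketch shows what concurrent in-storage processing can look like under the assumptions of this sketch: each storage device independently runs an operation (here, a simple filter) over the formatted data it holds, without involving the application. The device class and the filter are illustrative stand-ins, not elements of the specification.

```python
# A minimal sketch of concurrent, independent in-storage processing.
from concurrent.futures import ThreadPoolExecutor
from typing import List

class StorageDevice:                      # illustrative stand-in for a storage device 218
    def __init__(self, formatted_units: List[int]):
        self.formatted_units = formatted_units

    def in_storage_filter(self, threshold: int) -> List[int]:
        # The manipulation happens at the device, not in the application.
        return [v for v in self.formatted_units if v > threshold]

devices = [StorageDevice([1, 7, 3]), StorageDevice([9, 2, 8])]
with ThreadPoolExecutor() as pool:
    in_storage_outputs = list(pool.map(lambda d: d.in_storage_filter(5), devices))
# in_storage_outputs == [[7], [9, 8]]
```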
  • The request distributor 210 manages application requests 220 between the application 202 and the storage group 206. As a specific example, the request distributor 210 accepts the application requests 220 from the application 202 and distributes them. The application requests 220 are actions between the application 202 and the storage group 206 based on the in-storage processing. For example, the application requests 220 can provide information from the application 202 to be off-loaded to the storage group 206 for in-storage processing. Furthering the example, the results of the in-storage processing can be returned to the application 202 based on the application requests 220.
  • As an example, the request distributor 210 manages the application requests 220 from the application 202 for in-storage processing, for write or storage, or for output. The request distributor 210 also distributes the application requests 220 from the application 202 across the multiple storage devices 218 in the storage group 206.
  • As another example, incoming application requests 220 for in-storage processing can be split into multiple sub-application requests 222 to perform in-storage processing according to a distribution of the formatted data 216, organization of the storage group 206, or other policies. The request distributor 210 can perform this split of the application request 220 for in-storage processing based on the placement scheme for the application data 214, the formatted data 216, or a combination thereof.
  • Example types of data placement schemes include a centralized scheme and a decentralized scheme, discussed in FIGS. 9 to 11. In various embodiments with a centralized scheme, the data preprocessor 208 is placed inside the in-storage processing coordinator 204, while a decentralized model places the data preprocessor 208 inside the storage group 206.
  • For the embodiments with a centralized scheme, once the in-storage processing coordinator 204 receives an application request 220, such as a data write request with required information (for example, address, data, data length, and a logical boundary), from the application 202, the request distributor 210 provides the data preprocessor 208 with the required information such as data, data length, and logical boundary. Then, the data preprocessor 208 partitions the data into multiple data chunks of an appropriate size based on the store unit information, and the request distributor 210 distributes the corresponding data chunks to each of the storage devices 218 with multiple sub-application requests 222. The storage group 206, the storage devices 218, or a combination thereof can receive the application requests 220, the sub-application requests 222, or a combination thereof. On the other hand, the request distributor 210 in a decentralized model divides the data into chunks of a predefined size, for instance, data size/N, where N is the number of storage devices, and then distributes the chunks of data to each of the storage devices 218 with sub-application requests 222 combined with the required information such as address, data length, and a logical boundary. Then, the data preprocessor 208 inside the storage devices 218 partitions the assigned data into smaller chunks based on the store unit information.
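For illustration only, the following sketch contrasts the two placement schemes described above under simplified assumptions: in the centralized scheme the data is chunked by a store-unit size before distribution, while in the decentralized scheme the data is first divided into roughly data size/N pieces and each device re-chunks its piece locally. The function and parameter names are illustrative.

```python
# A minimal sketch of centralized vs. decentralized chunking; names are illustrative.
from typing import List

def chunk(data: bytes, size: int) -> List[bytes]:
    return [data[i:i + size] for i in range(0, len(data), size)]

def centralized_distribute(data: bytes, store_unit: int, n_devices: int) -> List[List[bytes]]:
    chunks = chunk(data, store_unit)                      # preprocessor chunks first
    per_device: List[List[bytes]] = [[] for _ in range(n_devices)]
    for i, c in enumerate(chunks):                        # distributor spreads the chunks
        per_device[i % n_devices].append(c)
    return per_device

def decentralized_distribute(data: bytes, store_unit: int, n_devices: int) -> List[List[bytes]]:
    # Distributor: roughly data size / N per device (a sketch; edge pieces may differ).
    coarse = chunk(data, max(1, len(data) // n_devices))
    # Each device's preprocessor re-chunks its piece by the store-unit size.
    return [chunk(piece, store_unit) for piece in coarse]
```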
  • As a further specific example, for a write request for the application data 214, given the application data 214 to be written, its length, and an optional logical boundary of the application data 214, the request distributor 210 can send the write request to the data preprocessor 208 so that it can determine how to distribute the application data 214. Once data distribution is determined, the request distributor 210 can issue the write request to the storage devices 218 in the storage group 206. The host bus controller 108 of FIG. 1 or the network attach port 118 of FIG. 1 can be used to execute the request distributor 210 and issue the application requests 220.
  • Continuing with the example, the storage devices 218 can perform the in-storage processing on the formatted data 216. The request distributor 210 can process the request for output by forwarding the output request to the in-storage processing coordinator 204, or as a specific example to the output coordinator 212, to send in-storage processing outputs 224 back to the application 202 or the system executing the application 202. The application can continue to execute with the in-storage processing outputs 224. The in-storage processing outputs 224 can be the results of the in-storage processing by the storage group 206 of the formatted data 216. The in-storage processing outputs 224 are not a mere read-back or read of the formatted data 216 stored in the storage group 206.
  • The output coordinator 212 can manage processed data generated from each of the multiple storage devices 218 of the storage group 206 and can send it back to the application 202. As an example, the output coordinator 212 collects the results or the in-storage processing outputs 224 and provides them to the application 202 or various applications 202 or the system executing the application 202. The output coordinator 212 will be described later.
  • The computing system 100 also can provide error handling capabilities. For example, when one or more of the storage devices 218 in the storage group 206 becomes inaccessible or exhibits slower performance, the application requests 220 can fail, such as with time-outs or non-completions. For better availability, the computing system 100 can perform a number of actions.
  • The following are examples for the application requests 220 for writes to the storage group 206. The in-storage processing coordinator 204, or as a more specific example the request distributor 210, can maintain a request log that can be used to issue retries for the application requests 220 that failed or were not completed. Also as an example, the in-storage processing coordinator 204 can keep retrying the application requests 220 to write the application data 214. As a further example, the in-storage processing coordinator 204 can report the status of the application requests 220 to the application 202.
  • The following are examples for the application requests 220 for in-storage processing at the storage group 206. If one of the storage devices 218 in the storage group 206 includes a replica of the application data 214, the formatted data 216, or a combination thereof corresponding to the storage device 218 that was inaccessible, these application requests 220 can be redirected to the storage device 218 with the replica. If error recovery is possible, the error recovery process can be executed prior to the previously failed application requests 220 being reissued to the recovered storage device 218. An example of the error recovery technique can be a redundant array of inexpensive disk (RAID) recovery with rebuilding of a storage device 218 that has been striped. As other examples, the in-storage processing coordinator 204 can retry the application requests 220 that previously failed. The in-storage processing coordinator 204 can also generate reports of failures even if the application requests 220 are redirected, retried, and even eventually successful.
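For illustration only, the following sketch models the error-handling behaviors described above: a request log that supports retries of failed requests, and redirection of an in-storage processing request to a device holding a replica. The request and device abstractions are assumptions of this sketch.

```python
# A minimal sketch of retry-with-log and redirect-to-replica; names are illustrative.
from typing import Callable, Dict, List, Optional

def issue_with_retry(send: Callable[[dict], bool], request: dict,
                     log: List[dict], max_retries: int = 3) -> bool:
    log.append(request)                       # request log kept by the coordinator
    for _ in range(max_retries):
        if send(request):
            return True                       # completed; status could be reported
    return False                              # report the failure to the application

def redirect_to_replica(request: dict, replicas: Dict[int, int]) -> Optional[dict]:
    # If the target device is inaccessible, reissue to the device holding the replica.
    replica_device = replicas.get(request["device_id"])
    if replica_device is None:
        return None                           # no replica; error recovery may be needed
    return {**request, "device_id": replica_device}
```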
  • The in-storage processing coordinator 204 or at least a portion of it can be implemented in a number of ways. As an example, the in-storage processing coordinator 204 can be implemented with software, hardware circuitry, or a combination thereof. Examples of hardware circuitry can include a processor, an application specific integrated circuit (ASIC), an embedded processor, a microprocessor, hardware control logic, a hardware finite state machine (FSM), a digital signal processor (DSP), an FPGA, or a combination thereof.
  • Referring now to FIG. 3, therein is shown an example of an operational view for a split function of the data preprocessor 208 of FIG. 2. FIG. 3 depicts the application data 214 as input to the data preprocessor 208 or, more generally, the in-storage processing coordinator 204 of FIG. 2. FIG. 3 depicts one example method of the data formatting performed by the data preprocessor 208 as mentioned in FIG. 2. In this example, the data formatting is a split function or a split scheme.
  • In this example, the amount of the application data 214 is shown to span a transfer length 302. The transfer length 302 refers to the amount of data or information sent by the application 202 to the data preprocessor 208 or vice versa. The transfer length 302 can be a fixed size or variable depending on what the application 202 transfers for in-storage processing.
  • Also in this example, the application data 214 can include application units 304. The application units 304 are fields within or portions of the application data 214. Each of the application units 304 can be fixed in size or can be variable. As an example, the application units 304 can represent partitioned portions or chunks of the application data 214.
  • As an example, the size of each of the application units 304 can be the same across the application data 214. Also as an example, the size of each of the application units 304 across the application data 214 can differ for different transfers of the application data 214. Further for example, the size of each of the application units 304 can vary within the same transfer or across transfers. The application units 304 can also vary in size depending on the different applications 202 sending the application data 214. The number of application units 304 can vary or can be fixed. The number of application units 304 can vary for the same application 202 sending the application data 214 or between different applications 202.
  • FIG. 3 also depicts the formatted data 216 as the output of the data preprocessor 208. The formatted data 216 can include formatted units 306 (FUs). The formatted units 306 are fields within the formatted data 216. In this example, each of the formatted units 306 can be fixed in size or can be variable. The size of the formatted units 306 can be the same for the formatted data 216 or different for transfers of the formatted data 216, or can vary within the same transfer or across transfers. The formatted units 306 can also vary in size depending on the different applications 202 sending the formatted data 216. The number of the formatted units 306 can vary or can be fixed. The number of the formatted units 306 can vary for the same application 202 sending the formatted data 216 or between different applications 202.
  • FIG. 3 depicts the formatted data 216 after a split formatting with the formatted units 306 overlaid visually with the application units 304. An example storage application for this split formatting or split scheme can be with redundant array of inexpensive disk (RAID) systems as the storage group 206, or with at least some of the multiple storage devices 218 in the storage group 206. The in-storage processing, or even the mere storage of the application data 214, can at least involve splitting the application units 304 to different destination devices in the storage group 206.
  • Continuing with this example, the data preprocessor 208 can split the application data 214 into predefined fixed-length blocks referred to as the formatted units 306 and can give each block to one or more of the multiple storage devices 218 for in-storage processing in a round robin fashion, as an example. The split scheme can generate non-aligned data sets between the application data 214 and the formatted data 216. As a specific example, the data preprocessor 208 can generate the non-alignment between the application units 304 relative to the boundaries for the formatted units 306.
  • Further with this example, FIG. 3 depicts an alternating formatting or allocation of the application units 304 to the different devices in the storage group 206. In this example, the application units 304 are depicted as "Data 1", "Data 2", "Data 3", and through "Data K". The formatted units 306 are depicted as "FU 1", "FU 2", and through "FU N".
  • As a specific example, the formatted data 216 can have alternating instances targeted for one device or another device in the storage group 206. In other words, for example, odd numbered “FUs” can be for drive 1 and even numbered “FUs” can be for drive 0. The overlay of the application units 304 as “Data” is shown as not aligned with the boundaries of the “FU” and FIG. 3 depicts “Data 2” and “Data K” being split between FU 1 (drive 1) and FU 2 (drive 0), again for this example.
  • As a further example, the formatted data 216 can also be stored on one of the storage devices 218 as opposed to being partitioned or allocated to different instances of the storage devices 218 in the storage group 206. In this example, the formatted units 306 can be sized for a sector based on a physical block address or a logical block address on one of the storage devices 218, such as a hard disk drive or a solid state disk drive.
  • As a specific example for the split function, the request distributor 210 can initially send up to N in-storage processing application requests 220 to the storage devices 218 in the storage group 206. The term N is an integer number. The application units 304 that are not aligned with the formatted units 306, such as “Data 2” and “Data K” in this figure and example, can undergo additional processing at the storage devices 218 with in-storage processing.
  • For example, the non-aligned application units 304 can be determined after initial processing of the application data 214 by the host computer 102 of FIG. 1, the request distributor 210, or other storage devices 218. The non-aligned application units 304 can be fetched by the host computer 102 or the request distributor 210, allowing the non-aligned application units 304 to be concurrently processed by the host computer 102, the request distributor 210, the storage devices 218 in the storage group 206, or a combination thereof. The non-aligned application units 304 can also be fetched by the host computer 102 or the request distributor 210 such that these non-aligned application units 304 can be written back to the devices for in-storage processing. Each of the storage devices 218 can send the results of the processed non-aligned application units 304 to the host computer 102, the request distributor 210, or the other storage devices 218 so the host computer 102 or the other storage devices 218 can continue to process the application data 214.
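For illustration only, the following sketch models the split scheme of FIG. 3 under simplified assumptions: fixed-length formatted units handed to the storage devices round-robin, with a helper that identifies application units that straddle a formatted-unit boundary and therefore need the extra handling described above. All names are illustrative.

```python
# A minimal sketch of the split scheme; not the patented implementation.
from typing import List

def split_scheme(app_units: List[bytes], fu_size: int,
                 n_devices: int) -> List[List[bytes]]:
    stream = b"".join(app_units)                              # the application data
    fus = [stream[i:i + fu_size] for i in range(0, len(stream), fu_size)]
    per_device: List[List[bytes]] = [[] for _ in range(n_devices)]
    for i, fu in enumerate(fus):                              # round-robin placement
        per_device[i % n_devices].append(fu)
    return per_device

def non_aligned_units(app_units: List[bytes], fu_size: int) -> List[int]:
    # Indices of application units that cross a formatted-unit boundary.
    out, offset = [], 0
    for idx, unit in enumerate(app_units):
        if offset // fu_size != (offset + len(unit) - 1) // fu_size:
            out.append(idx)
        offset += len(unit)
    return out
```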
  • Referring now to FIG. 4, therein is shown an example of an operational view for a split+padding function of the data preprocessor 208 of FIG. 2. FIG. 4 depicts the application data 214 as input to the data preprocessor 208 as similarly described in FIG. 3. FIG. 4 also depicts the formatted data 216 as the output of the data preprocessor 208. FIG. 4 depicts one example method of the data formatting performed by the data preprocessor 208 as mentioned in FIG. 2. In this example, the data formatting is a split+padding function or split+padding scheme.
  • In this example, the split+padding function by the data preprocessor 208 adds data pads 402 to align the application units 304 to the formatted units 306. The alignment of the application units 304 and the formatted units 306 can allow the request distributor 210 of FIG. 2 to send up to K independent in-storage processing application requests 220 to multiple storage devices 218 of FIG. 2 in the storage group 206 of FIG. 2. The term K is an integer. In other words, the alignment allows for each of the multiple storage devices 218 to perform in-storage processing of the application units 304, the formatted units 306, or a combination thereof independently without requiring further formatting or processing required on the formatted data 216.
  • As a specific example, each of the formatted units 306 includes one of the application units 304 plus one of the data pads 402. Each of the data pads 402 aligns each of the application units 304 to the boundaries of each of the formatted units 306. The data pads 402 can also provide other functions or include other information. For example, the data pads 402 can include error detection or error correction information, such as parity, ECC protection, meta-data, etc.
  • The data pads 402 can be placed or located in a number of different locations within the formatted units 306. For example, one of the data pads 402 can be located at the end of one of the application units 304 as shown in FIG. 4. Also for example, each of the data pads 402 can also be located at the beginning of each of the formatted units 306 and before each of the application units 304. Further for example, each of the data pads 402 can be distributed, uniformly or non-uniformly, across each of the formatted units 306 and within each of the application units 304.
  • As an example, the size of each of the data pads 402 can depend on the difference in size between each of the application units 304 and each of the formatted units 306. The data pads 402 can be the same for each of the formatted units 306 or can vary. Further for example, the term size can refer to the number of bits or symbols for the formatted units 306, the application units 304, or a combination thereof. The term size can also refer to the transmission time, recording time, or a combination thereof for the formatted units 306, the application units 304, or a combination thereof.
  • In this example, the size of the application data 214 is shown to span the transfer length 302 as similarly described in FIG. 3. In this example, the application units 304 are depicted as "Data 1", "Data 2", "Data 3", and through "Data K". In this example, the formatted units 306 are depicted as "FU 1", "FU 2", and through "FU N".
  • FIG. 4 depicts the formatted data 216 after a split+padding formatting with the formatted units 306 overlaid visually with the application units 304 and with the data pads 402. An example storage application for this split+padding formatting or scheme can be with use of redundant array of inexpensive disk (RAID) systems as the storage group 206 or with at least some of the multiple storage devices 218 in the storage group 206. The in-storage processing or even the mere storage of the application data 214 can at least involve splitting+padding the application units 304 to different destination devices in the storage group 206.
  • Continuing with this example, the data preprocessor 208 can split the application data 214 with the data pads 402 into a predefined length and give each length to one or more of the storage devices 218 for in-storage processing. A length can include any number of the application units 304. As a specific example, the formatted data 216 of any length can be targeted for one or more of the multiple storage devices 218 in the storage group 206.
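For illustration only, the following sketch models the split+padding scheme of FIG. 4 under simplified assumptions: each application unit is padded to the formatted-unit boundary so every formatted unit can be processed independently. The zero-byte pad is an assumption of this sketch; as noted above, a data pad 402 could instead carry parity, ECC, or metadata.

```python
# A minimal sketch of split+padding; names and the zero-byte pad are illustrative.
from typing import List

def split_padding(app_units: List[bytes], fu_size: int) -> List[bytes]:
    formatted_units = []
    for unit in app_units:
        if len(unit) > fu_size:
            raise ValueError("application unit larger than a formatted unit")
        pad = bytes(fu_size - len(unit))          # the data pad (here: zeros)
        formatted_units.append(unit + pad)        # pad placed after the unit, as in FIG. 4
    return formatted_units

# Example: variable-length units padded into 8-byte formatted units.
fus = split_padding([b"abc", b"de", b"fghij"], fu_size=8)
```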
  • Referring now to FIG. 5, therein is shown an example of an operational view for a split+redundancy function of the data preprocessor 208 of FIG. 2. FIG. 5 depicts the application data 214 as input to the data preprocessor 208 as similarly described in FIG. 3. The application data 214 can include the application units 304.
  • FIG. 5 also depicts the formatted data 216 as the output of the data preprocessor 208 as similarly described in FIG. 3. The formatted data 216 can include the formatted units 306.
  • In this example, the split+redundancy function can process the aligned and non-aligned application units 304. The in-storage processing in each of the storage devices 218 of FIG. 2 in the storage group 206 of FIG. 2 can process the aligned application units 304 separately, the non-aligned application units 304 separately, or both at the same time.
  • In this example, the data preprocessor 208 is performing the split+redundancy function or the split+redundancy scheme. As part of this function, the split operation can split the application data 214 into formatted data 216 of fixed length, variable length, or a combination thereof.
  • Also part of the split+redundancy function is the redundancy function. For the redundancy function as the example, the data preprocessor 208 does not necessarily need to manipulate the application data 214, the application units 304, or a combination thereof that are non-aligned to the formatted units 306 as the split function described in FIG. 3. This is depicted as the first row of the formatted data 216 in FIG. 5 and is redundancy data 502. The formatted data 216 generated from the split+redundancy function includes the redundancy data 502.
  • As an example, the redundancy data 502 can be an output of the data preprocessor 208 mapping the application data 214, or as a more specific example the application units 304, to the formatted data 216 and across the formatted units 306 even with some of the application units 304 nonaligned with the formatted units 306. In other words, some of the application units 304 fall within the boundary of one of the formatted units 306, and these application units 304 are considered aligned. Other instances of the application units 304 traverse multiple instances of the formatted units 306, and these application units 304 are considered nonaligned. As a specific example, the application units 304 depicted as "Data 2" and "Data K" each span across two adjacent instances of the formatted units 306.
  • Also as an example, the split+redundancy function can also perform the split+padding function on some of the application units 304. The data preprocessor 208 can store the application units 304 that are not aligned to the formatted units 306. This is depicted in the second row of the formatted data 216 of FIG. 5 and is an aligned data 504. For these particular non-aligned application units 304, the data preprocessor 208 can perform the split+padding function as described in FIG. 4 to form the aligned data 504. In the example depicted in FIG. 5, the application units 304 "Data 2" and "Data K" are not aligned to, or traverse, multiple instances of the formatted units 306. The aligned data 504 generated by the data preprocessor 208 adds the data pads 402 to these instances of the nonaligned application units 304 from the redundancy data 502.
  • In this example, the split+redundancy function allows the in-storage processing coordinator 204 to send up to N+M requests to the storage devices 218 in the storage group 206. Both N and M are integers. N represents the number of formatted units 306 in the redundancy data 502. M represents the additional formatted units 306 in the aligned data 504. For the in-storage processing in each of the storage devices 218, the non-aligned application units 304 in the redundancy data 502 can be ignored.
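For illustration only, the following sketch models the split+redundancy scheme of FIG. 5 under simplified assumptions: the data is split into N formatted units without moving non-aligned application units (the redundancy data 502), and the non-aligned units are additionally written as padded, aligned units (the aligned data 504), yielding up to N+M requests. Names and the zero-byte padding are illustrative.

```python
# A minimal sketch of split+redundancy; not the patented implementation.
from typing import List, Tuple

def split_redundancy(app_units: List[bytes],
                     fu_size: int) -> Tuple[List[bytes], List[bytes]]:
    stream = b"".join(app_units)
    redundancy = [stream[i:i + fu_size]                       # N formatted units, as-is
                  for i in range(0, len(stream), fu_size)]
    aligned, offset = [], 0
    for unit in app_units:
        crosses = offset // fu_size != (offset + len(unit) - 1) // fu_size
        if crosses:                                           # non-aligned application unit
            aligned.append(unit + bytes((-len(unit)) % fu_size))  # pad to an FU boundary
        offset += len(unit)
    return redundancy, aligned                                # up to N + M requests total
```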
  • Referring now to FIG. 6, therein is shown an example of an operational view for a mirroring function of the data preprocessor 208 of FIG. 2. FIG. 6 depicts the formatted data 216 as the output of the data preprocessor 208 as similarly described in FIG. 3. The formatted data 216 of FIG. 2 can include the formatted units 306 of FIG. 3. The application data 214 of FIG. 3 can be processed by the data preprocessor 208.
  • When the application data 214 is mirrored in this example, at least some of the storage devices 218 of FIG. 2 can receive all of the application data 214, which are replicated, or also referred to as mirrored. The application data 214 that are replicated are referred to as replica data 602. FIG. 6 depicts the multiple storage devices 218 as “Device 1” through “Device r” for the replica data 602. Replicated units 604 are the application units 304 of FIG. 3 that are replicated and are shown as “Data 1”, “Data 2”, “Data 3” through “Data K” on “Device 1” through “Device r”. One of the storage devices 218 can store the application data 214 as the formatted data 216 for that storage device 218. Some of the other storage devices 218 can store the replica data 602 and the replicated units 604.
  • In this example, the data preprocessor 208 does not manipulate the application units 304 or the application data 214 as a whole. However, the data preprocessor 208 can collect or store mirroring information and the application units 304. Also, the in-storage processing coordinator 204 can receive the application data 214 or the application units 304 from the application 202 when preparing for efficient, concurrent in-storage processing.
  • The in-storage processing coordinator 204 or the data preprocessor 208 can perform the mirroring functions in a number of ways. As an example, the in-storage processing coordinator 204 or the data preprocessor 208 can take into account factors for mirroring the application data 214 to the formatted data 216. One factor is the number of target devices from the multiple storage devices 218. Another factor is the size of the application data 214, the application units 304 of FIG. 3, or a combination thereof. A further factor is the size of the formatted data 216, the formatted units 306, or a combination thereof.
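For illustration only, the following sketch models the mirroring scheme of FIG. 6 under simplified assumptions: the application units are not manipulated; each of the r target devices simply receives a full replica. The device-count check is only a nod to the placement factors listed above; all names are illustrative.

```python
# A minimal sketch of mirroring; names are illustrative.
from typing import Dict, List

def mirror(app_units: List[bytes], target_devices: List[int]) -> Dict[int, List[bytes]]:
    if not target_devices:
        raise ValueError("mirroring needs at least one target device")
    # Every target device stores the same replicated units ("Device 1" .. "Device r").
    return {device: list(app_units) for device in target_devices}

replicas = mirror([b"Data 1", b"Data 2", b"Data 3"], target_devices=[1, 2, 3])
```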
  • Referring now to FIG. 7, therein is shown an example of an architectural view of the output coordinator 212. As noted earlier, the output coordinator 212 manages the in-storage processing outputs 224 generated from each of the multiple storage devices 218 of the storage group 206 and sends it back to the application 202. The output coordinator 212 can manage the interaction with the application 202 in a number of ways.
  • As an example, the output coordinator 212 function can be described as an output harvest 702, an output management 704, and an output retrieval 706. The output harvest 702 is a process for collecting the in-storage processing outputs 224. For example, the output harvest 702 can collect the in-storage processing outputs 224 from each of the storage devices 218 and store them. The storage can be done locally where the output harvest 702 is being executed. Also for example, the output harvest 702 can collect the locations of the in-storage processing outputs 224 in each of the storage devices 218.
  • The following are examples of various embodiments of how the output coordinator 212, or as a specific example the output harvest 702, can collect the in-storage processing outputs 224 from the storage devices 218. As an example, the output coordinator 212 can fetch the in-storage processing outputs 224 or their locations from each of the storage devices 218 that performed the in-storage processing of the application data 214 of FIG. 2, the formatted data 216 of FIG. 2, or a combination thereof.
  • As an example, the output coordinator 212 can fetch the in-storage processing outputs 224 in a number of ways. For example, the output coordinator 212 can utilize a direct memory access (DMA) with the storage devices 218. DMA transfers are transfer mechanisms not requiring a processor or a computing resource to manage the actual transfer once the transfer is set up. As another example, the output coordinator 212 can utilize a programmed input/output (PIO) with the storage devices 218. PIO transfers are transfer mechanisms where a processor or computing resource manages the actual transfer of data, not just the setup and the status collection at the termination of the transfer. As a further example, the output coordinator 212 can utilize interface protocol commands, such as SATA vendor specific commands, PCIe, DMA, or Ethernet commands.
  • As an example, the storage devices 218 can send the in-storage processing outputs 224 to the output coordinator 212 in a number of ways. For example, the output coordinator 212 can utilize the DMA or PIO mechanisms. The DMA can be a remote DMA (rDMA) whereby the transfer is a DMA process from memory of one computer (e.g. the computer running the application 202) into that of another (e.g. one of the storage devices 218 for the in-storage processing) without involving either one's operating system or processor intervention for the actual transfer. As another example, the output coordinator 212 can utilize interface protocol processes, such as background SATA connection or Ethernet.
  • Also for example, the storage devices 218 can send their respective in-storage processing outputs 224 or their locations to the application 202. This can be accomplished without the in-storage processing outputs 224 passing through the output coordinator 212. For this example, the storage devices 218 and the application 202 can interact in a number of ways, such as DMA, rDMA, PIO, background SATA connection, or Ethernet.
  • Regarding the output management 704, the output coordinator 212 can manage the order of the outputs from the storage devices 218. The output management 704 manages the outputs based on multiple constraints, such as size of output, storage capacity of output coordinator 212, and types of the application requests 220 of FIG. 2. The outputs can be the in-storage processing outputs 224. As an example, the output management 704 can order the outputs based on various policies.
  • As a specific example, the outputs or the in-storage processing outputs 224 for each of the sub-application requests 222 of FIG. 2 for in-storage processing can be stored in a sorted order by a sub-request identification 708 per a request identification 710 for the in-storage processing. The request distributor 210 can transform the application request 220 of FIG. 2 into multiple sub-application requests 222 with the formatted data 216 and distribute them to the storage devices 218.
  • After data processing in each of the storage devices 218, the output coordinator 212 gathers the in-storage processing outputs 224 from each of the storage devices 218. The output coordinator 212 may need to preserve the issuing order of the application requests 220, the sub-application requests 222, or a combination thereof, even though the in-storage processing outputs 224 can be delivered to the output coordinator 212 in an arbitrary order because the data processing times of the storage devices 218 can differ.
  • As an example to implement this order, the storage group 206 of FIG. 2 can assign a sequence number to each of the in-storage processing outputs 224, where each of the in-storage processing outputs 224 also can be composed of multiple sub-outputs. For these sub-outputs, the storage group 206 also assigns sequence numbers or sequence identifications. Once the output coordinator 212 receives each of the in-storage processing outputs 224 or sub-output data from each of the storage devices 218, it can maintain each output's sequence by sorting them by sequence number or identification. If the order of the in-storage processing outputs 224 or the sub-outputs is not important for the application 202, the output coordinator 212 can send the in-storage processing outputs 224 out of order.
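  • The sequence-number handling described above can be sketched as follows; the heap-based reassembly, the class name, and the field names are illustrative assumptions, not a required implementation.

        import heapq

        class OutputSequencer:
            """Releases in-storage processing outputs 224 (or sub-outputs) only in
            sequence-number order, holding back anything received ahead of a hole."""
            def __init__(self):
                self._pending = []   # min-heap of (sequence_number, output)
                self._next_seq = 0   # next sequence number expected by the application

            def receive(self, sequence_number, output):
                heapq.heappush(self._pending, (sequence_number, output))

            def drain_in_order(self):
                while self._pending and self._pending[0][0] == self._next_seq:
                    _, output = heapq.heappop(self._pending)
                    self._next_seq += 1
                    yield output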
  • The request identification 710 represents information that can be used to demarcate one of the application requests 220 from another. The sub-request identification 708 represents information that can be used to demarcate one of the sub-application requests 222 from another.
  • As an example, the sub-request identification 708 can be unique or associated with a specific instance of the request identification 710. As a further example, the sub-request identification 708 can be non-constrained to a specific instance of the request identification 710.
  • As a more specific example, the output coordinator 212 can include an output buffer 712. The output buffer 712 can store the in-storage processing outputs 224 from the storage devices 218. The output buffer 712 can be implemented in a number of ways. For example, the output buffer 712 can be a hardware implementation of a first-in first-out (FIFO) circuit or of a linked list structure. Also for example, the output buffer 712 can be implemented with memory circuitry with software providing the intelligence for the FIFO operations, such as pointers, status flags, etc.
  • Also as a specific example, the outputs or the in-storage processing outputs 224 for each of the sub-application requests 222 can be added to the output buffer 712. The in-storage processing outputs 224 can be fetched from the output buffer 712 as long as the output for the desired instance of the sub-application requests 222 is in the output buffer 712. The sub-request identification 708 can be utilized to determine whether the associated in-storage processing output 224 has been stored in the output buffer 712. The request identification 710 can also be utilized, such as for an initial determination.
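  • As a software stand-in for the FIFO or linked-list implementations of the output buffer 712 mentioned above, the following sketch keys stored outputs by the request identification 710 and the sub-request identification 708; the class and method names are assumptions.

        class OutputBuffer:
            def __init__(self):
                self._outputs = {}   # {request_id: {sub_request_id: output}}

            def store(self, request_id, sub_request_id, output):
                self._outputs.setdefault(request_id, {})[sub_request_id] = output

            def fetch(self, request_id, sub_request_id):
                # Returns the output only if it has already arrived, else None,
                # mirroring the "fetch as long as the output is in the buffer" rule.
                return self._outputs.get(request_id, {}).get(sub_request_id)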
  • Continuing the example for various embodiments, the output coordinator 212 can collect the in-storage processing outputs 224 from the storage devices 218. To guarantee the data integrity of the in-storage processing outputs 224, the output coordinator 212 can maintain the sequence of each of the in-storage processing outputs 224 or sub-output data in a correct order. For this, the output coordinator 212 can utilize the sub-request identification 708 or the request identification 710 (e.g. if each of the in-storage processing outputs 224 of each of the storage devices 218 also reuses the same identification as its output sequence number or output sequence identification). Since the processing times of each of the storage devices 218 can be different, the output coordinator 212 can temporarily store each of the in-storage processing outputs 224 or sub-output data into the output buffer 712 to make them all sequential (i.e., correct data order). If there is any missing in-storage processing output 224 or sub-output (that is, a hole in the sequence IDs), the application 202 cannot get the output data until all the in-storage processing outputs 224 are correctly collected in the output buffer 712.
  • As a further specific example, the outputs or the in-storage processing outputs 224 for each of the sub-application requests 222 can be sent to the application 202 without passing through the output coordinator 212 or the output buffer 712 in the output coordinator 212. In this example, the in-storage processing outputs 224 can be sent from the storage devices 218 without being stored before reaching the application 202.
  • Regarding the output retrieval 706, once the output or the in-storage processing outputs 224 are known, the application 202 can retrieve the in-storage processing outputs 224 in a number of ways. In some embodiments, the output retrieval 706 can include the in-storage processing outputs 224 passing through the output coordinator 212. In other embodiments, the output retrieval 706 can include the in-storage processing outputs 224 being sent to the application 202 without passing through the output buffer 712.
  • As an example, the outputs or the in-storage processing outputs 224 can be passed from the storage devices 218 to the output coordinator 212. The output coordinator 212 can store the in-storage processing outputs 224 in the output buffer 712. The output coordinator 212 can then send the in-storage processing outputs 224 to the application 202.
  • Also as an example, the outputs or the in-storage processing outputs 224 can be passed from the storage devices 218 to the output coordinator 212. The output coordinator 212 can send the in-storage processing outputs 224 to the request distributor 210. The request distributor 210 can send the in-storage processing outputs 224 to the application 202. In the example, the output buffer 712 can be within the output coordinator 212, the request distributor 210, or a combination thereof.
  • Further as an example, the outputs or the in-storage processing outputs 224 can be passed from the storage devices 218 to the application 202. In this example, this transfer is direct, without the in-storage processing outputs 224 passing through the output coordinator 212, the request distributor 210, or a combination thereof.
  • The output coordinator 212 can be implemented in a number of ways. For example, the output coordinator 212 can be implemented in hardware circuitry, such as a processor, an application specific integrated circuit (ASIC), an embedded processor, a microprocessor, a hardware control logic, a hardware finite state machine (FSM), a digital signal processor (DSP), an FPGA, or a combination thereof. Also for example, the output coordinator 212 can be implemented with software. Further for example, the output harvest 702, the output management 704, the output retrieval 706, or a combination thereof can be implemented with hardware circuitry, with the examples noted earlier, or with software.
  • Similarly, the request distributor 210 can be implemented in a number of ways. For example, the request distributor 210 can be implemented in hardware circuitry, such as a processor, an application specific integrated circuit (ASIC), an embedded processor, a microprocessor, a hardware control logic, a hardware finite state machine (FSM), a digital signal processor (DSP), an FPGA, or a combination thereof. Also for example, the request distributor 210 can be implemented with software.
  • Referring now to FIGS. 8A and 8B, therein are shown detailed examples of an operational view of the split and split+padding functions. FIGS. 8A and 8B depict embodiments for an in-storage processing (ISP)-aware RAID. Various embodiments can be applied to an array configuration for the storage devices 218 of FIG. 2 or the storage group 206 of FIG. 2. Examples of RAID functions include striping, mirroring, or a combination thereof.
  • FIGS. 8A and 8B depict examples of the application data 214 and the application units 304. The application units 304 can be processed by the in-storage processing coordinator 204 of FIG. 2. FIGS. 8A and 8B each depicts one example.
  • The example in FIG. 8A depicts the application data 214 undergoing the split function, similarly to the one described in FIG. 3. This depiction can also represent a striping function in a RAID application.
  • The example in FIG. 8B depicts the application data 214 undergoing a split+padding function, similarly to the one described in FIG. 4. This depiction can also represent a striping function in a RAID application, but for various embodiments providing the in-storage processing for the split+padding function.
  • Describing FIG. 8A, this part depicts the formatted data 216 and the formatted units 306. In this example, the formatted data 216 is split and sent to two of the storage devices 218. Each of the formatted units 306 includes one or more of the application units 304; for example, FU0 in DEV1 can include AU0 and AU1. Each of these application units 304 can be entirely contained in one of the formatted units 306, such as AU0, AU1, AU2, AU4, etc., or can traverse or span across multiple formatted units 306, such as AU3, AU6, AU8, etc. As described in FIG. 3, some of the application units 304 are aligned with the formatted units 306 while others are not.
  • In this example, there are shown 10 of the application units 304 being split into the formatted units 306 that are sent to two of the storage devices 218. In this example, the application units 304 labeled AU1, AU3, AU5, AU6, and AU8 are not aligned. These non-aligned application units 304 can be identified with in-storage processing and separately processed by host systems or cooperatively with other storage devices 218. Therefore, the application requests 220 of FIG. 4 for in-storage processing can be serialized, and more complex request coordination could be required.
  • Describing FIG. 8B, this part depicts the formatted data 216 and the formatted units 306, as in FIG. 8A. As in FIG. 8A, this example depicts the formatted data 216 being split in some form and sent to two of the storage devices 218. In this example, each of the application units 304, such as AU0, AU1, AU2, AU3, etc., can be aligned with one of the formatted units 306 with one of the data pads 402, as similarly described in FIG. 4.
  • In this example for use in ISP-aware RAID, the application units 304 are pre-processed and aligned by the split+padding policy, allowing each of the application requests 220 for in-storage processing to be independent. This independence can maximize the opportunity for efficient, concurrent processing since no additional phase of processing is required for the formatted units 306 with the aligned application units 304, compared with the non-aligned units.
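  • One way to model the split+padding placement for the ISP-aware RAID example is sketched below; it assumes every application unit fits within one chunk and pads out the tail of a chunk whenever the next unit would otherwise straddle it (the function name and the layout representation are assumptions).

        def layout_with_padding(unit_lengths, chunk_size):
            """Return (chunk_index, offset_in_chunk) for each application unit 304,
            inserting a data pad 402 whenever a unit would cross a chunk boundary."""
            placements, offset = [], 0
            for length in unit_lengths:
                if offset % chunk_size + length > chunk_size:      # unit would straddle chunks
                    offset += chunk_size - offset % chunk_size     # pad out the current chunk
                placements.append((offset // chunk_size, offset % chunk_size))
                offset += length
            return placements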
  • Referring now to FIG. 9, therein is shown an example of an architectural view of the computing system 900 in an embodiment. The computing system 900 can be an embodiment of the computing system 100 of FIG. 1.
  • In this embodiment as an example, FIG. 9 depicts the in-storage processing coordinator 904 in a centralized coordination model. In this model, the in-storage processing coordinator 904 is separate from or external to the host computer 102 and the storage devices 218. The terms separate and external represent that the in-storage processing coordinator 904 is in a separate system from the host computer 102 and the storage devices 218 and can be housed in a separate system housing.
  • In this example, the host computer 102 can be executing the application 202 of FIG. 2. The host computer 102 can also provide file and object services. Further to this example, the in-storage processing coordinator 904 can be included as part of the network 120 of FIG. 1, the data storage system 101 of FIG. 1, implemented external to the host computer 102, or a combination thereof. As previously described in FIG. 2 and other figures earlier, the in-storage processing coordinator 904 can include the request distributor 910, the data preprocessor 908, and the output coordinator 912.
  • Continuing with this example, each of the storage devices 218 performs the in-storage processing functions. Each of the storage devices 218 can include an in-storage processing engine 922. The in-storage processing engine 922 can perform the in-storage processing for its respective storage device 218.
  • The storage devices 218 can be located in a number of places within the computing system 100. For example, the storage devices 218 can be located within the data storage system 101 of FIG. 1, as part of the network 120 of FIG. 1, the hard disk drive 116 of FIG. 1 or storage external to the host computer 102, or as part of the network attached storage 122 of FIG. 1.
  • In various embodiments in a centralized coordination model as in this example, the in-storage processing coordinator 904 can function with the storage devices 218 in a number of ways. For example, the storage devices 218 can be configured to support various functions, such as RAID 0, 1, 2, 3, 4, 5, 6, and object stores.
  • The in-storage processing engine 922 can be implemented in a number of ways. For example, the in-storage processing engine 922 can be implemented with software, hardware circuitry, or a combination thereof. Examples of hardware circuitry can include a processor, an application specific integrated circuit (ASIC), an embedded processor, a microprocessor, a hardware control logic, a hardware finite state machine (FSM), a digital signal processor (DSP), an FPGA, or a combination thereof.
  • Referring now to FIG. 10, therein is shown an example of an architectural view of the computing system 1000 in a further embodiment. The computing system 1000 can be an embodiment of the computing system 100 of FIG. 1.
  • In this embodiment as an example, FIG. 10 depicts the in-storage processing coordinator 1004 in a centralized coordination model. In this model, the in-storage processing coordinator 1004 is internal to the host computer 102. The term internal represents that the in-storage processing coordinator 1004 is in the same system as the host computer 102 and is generally housed in the same system housing as the host computer 102. This embodiment also has the in-storage processing coordinator 1004 as separate from or external to the storage devices 218.
  • In this embodiment as an example, the host computer 102 can include the in-storage processing coordinator 1004 as well as the file object services. In this example, the host computer 102 can execute the application 202 of FIG. 2. As previously described in FIG. 2 and other figures earlier, the in-storage processing coordinator 1004 can include the request distributor 1010, the data preprocessor 1008, and the output coordinator 1012.
  • Continuing with this example, each of the storage devices 218 performs the in-storage processing function. Each of the storage devices 218 can include an in-storage processing engine 1022. The in-storage processing engine 1022 can perform the in-storage processing for its respective storage device 218.
  • The storage devices 218 can be located in a number of places within the computing system 100. For example, the storage devices 218 can be located within the data storage system 101 of FIG. 1, as part of the network 120 of FIG. 1, the hard disk drive 116 of FIG. 1 or storage external to the host computer 102 or as part of the network attached storage 122 of FIG. 1.
  • In various embodiments in a centralized coordination model as in this example, the in-storage processing coordinator 1004 can function with the storage devices 218 in a number of ways. For example, the storage devices 218 can be configured to support various functions, such as RAID 0, 1, 2, 3, 4, 5, 6, and object stores.
  • The in-storage processing engine 1022 can be implemented in a number of ways. For example, in-storage processing engine 1022 can be implemented with software, hardware circuitry, or a combination thereof. Examples of hardware circuitry can include similar examples as in FIG. 9. The functions for this embodiment will be described in detail later.
  • Referring now to FIG. 11, therein is shown an example of an architectural view of the computing system 1100 in a yet further embodiment. The computing system 1100 can be an embodiment of the computing system 100 of FIG. 1.
  • In this embodiment as an example, FIG. 11 depicts the in-storage processing coordinator 1104 in a decentralized coordination model. In this example, the in-storage processing coordinator 1104 is partitioned between the host computer 102 and the storage devices 218. Additional examples of operational flow for this model are described in FIG. 15 and in FIG. 16.
  • As previously described in FIG. 2 and other figures earlier, the in-storage processing coordinator 1104 can include the request distributor 1110, the data preprocessor 1108, or a combination thereof. In this embodiment as an example, the data preprocessor 1108 and at least a portion of the request distributor 1110 are internal to the host computer 102. The term internal represents that the request distributor 1110 and the data preprocessor 1108 are in the same system as the host computer 102 and housed in the same system housing as the host computer 102.
  • Also, this embodiment has the output coordinator 1112 and at least a portion of the request distributor 1110 separate from or external to the host computer 102. As a specific example, this embodiment provides the output coordinator 1112 and at least a portion of the request distributor 1110 as internal to the storage devices 218.
  • In this example, the host computer 102 can execute the application 202 of FIG. 2. Continuing with this example, each of the storage devices 218 performs the in-storage processing function. Each of the storage devices 218 can include an in-storage processing engine 1122. The in-storage processing engine 1122 can perform the in-storage processing for its respective storage device 218.
  • The storage devices 218 can be located in a number of places within the computing system 100. For example, the storage devices 218 can be located within the data storage system 101 of FIG. 1, as part of the network 120 of FIG. 1, the hard disk drive 116 of FIG. 1 or storage external to the host computer 102 or as part of the network attached storage 122 of FIG. 1.
  • In various embodiments in a decentralized model as in this example, this partition of the in-storage processing coordinator 1104 can function with the storage devices 218 in a number of ways. For example, the storage devices 218 can be configured to support various functions, such as RAID 1 and object stores.
  • The in-storage processing engine 1122 can be implemented in a number of ways. For example, in-storage processing engine 1122 can be implemented with software, hardware circuitry, or a combination thereof. Examples of hardware circuitry can include similar examples as in FIG. 9.
  • Referring now to FIG. 12, therein is shown an example of an operational view of the computing system 100 for in-storage processing in a centralized coordination model. FIG. 12 can represent embodiments for the centralized coordination model described from FIG. 9 or FIG. 10.
  • FIG. 12 depicts the in-storage processing coordinator 204 and the interaction between the request distributor 210 and the data preprocessor 208 for the centralized coordination model. FIG. 12 also depicts the output coordinator 212. FIG. 12 also depicts the in-storage processing coordinator 204 interacting with the storage devices 218.
  • As an operational example, FIG. 12 depicts the in-storage processing coordinator 204 issuing the device requests 1202 for in-storage processing, such as write requests to the storage devices 218. The request distributor 210 can receive the application requests 220 of FIG. 2 for writing the application data 214. The request distributor 210 can also receive a data address 1204 as well as the transfer length 302 and a logical boundary 1206 of the application units 304. The data address 1204 can represent the address for the application data 214. The logical boundary 1206 represents the length or size of each of the application units 304.
  • Continuing with the example, the request distributor 210 can send information to the data preprocessor 208 to translate the application data 214 to the formatted data 216. The request distributor 210 can also send the transfer length 302 for the application data 214. The application data 214 can be sent to the data preprocessor 208 as the application units 304 or the logical boundaries to the application units 304.
  • Furthering the example, the data preprocessor 208 can translate the application data 214 or the application units 304 to generate the formatted data 216 or the formatted units 306 of FIG. 3. Examples of the types of translation can be one of the methods described in FIG. 2 and FIG. 3 through FIG. 6. The data preprocessor 208 can return the formatted data 216 or the formatted units 306 to the request distributor 210. The request distributor 210 can generate and issue device requests 1202 for writes to the storage devices 218 based on the formatting policies and policy for storing or for in-storage processing of the formatted data 216 or the formatted units 306. The device requests 1202 are based on the application requests 220.
  • Further continuing with the example, each of the storage devices 218 can include an in-storage processing function or application and the in-storage processing engine 922. Each of the storage devices 218 can receive the device requests 1202 and at least a portion of the formatted data 216.
  • For illustrative purposes, although FIG. 12 depicts the device requests 1202 being issued to all of the storage devices 218, it is understood that the request distributor 210 can operate differently. For example, the device requests 1202 can be issued to some of the storage devices 218 and not necessarily to all of them. Also for example, the device requests 1202 can be issued at different times or can be issued as part of the error handling examples as discussed in FIG. 2.
  • As a specific example for a centralized coordination model, the in-storage processing coordinator 204 can receive all the application requests 220 from the application 202, can issue all the device requests 1202 to the storage devices 218, or a combination thereof. The request distributor 210 can send or distribute the device requests 1202 to multiple storage devices 218 based on a placement scheme. The output coordinator 212 can collect and manage the in-storage processing outputs 224 from the storage devices 218. The output coordinator 212 can then send the in-storage processing outputs 224 to the application 202 of FIG. 2 as similarly described in FIG. 7.
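  • A hedged sketch of the centralized fan-out described above follows, using a simple round-robin placement scheme; the dictionary fields and the round-robin choice are assumptions for illustration, and the embodiments permit other placement schemes.

        def distribute(request_id, formatted_units, storage_devices):
            # Transform one application request 220 into per-device requests 1202,
            # tagging each with the request and sub-request identifications.
            device_requests = []
            for sub_request_id, unit in enumerate(formatted_units):
                device_requests.append({
                    "request_id": request_id,
                    "sub_request_id": sub_request_id,
                    "device": storage_devices[sub_request_id % len(storage_devices)],
                    "payload": unit,
                })
            return device_requests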
  • Referring now to FIG. 13, therein is shown an example of an operational view of the computing system 1300 issuing data write requests to the storage devices 1318 for in-storage processing in a decentralized coordination model. The computing system 1300 can include similarities to the computing system 1100 of FIG. 11. FIG. 13 depicts the in-storage processing coordinator 1304 including the request distributor 1310 and the data preprocessor 1308.
  • Both FIG. 12 and FIG. 13 depict an example of an operational view of computing system 1300 in terms of storing data to the storage devices 218. That is, both FIGS. 12 and 13 focus on how to efficiently store data across the storage devices 218 for in-storage processing.
  • FIG. 13 also depicts the output coordinator 1312 and a portion of the request distributor 1310 in each of the devices 1318. FIG. 13 also depicts the in-storage processing coordinator 1304 interacting with the devices 1318.
  • As an operational example, FIG. 13 depicts the in-storage processing coordinator 1304 issuing the device requests 1302 as write requests to the devices 1318. The request distributor 1310 in the in-storage processing coordinator 1304 can receive the application requests 220 of FIG. 2 for writing the application data 214 of FIG. 2. The request distributor 1310 can also receive a data address 1204 as well as the transfer length 302 of FIG. 3 and the logical boundary of the application units 304 of FIG. 3. The data address 1204 can represent the address for the application data 214.
  • Continuing with the example, the request distributor 1310 can send information to the data preprocessor 1308 to translate the application data 214 to the formatted data 216 of FIG. 2. The request distributor 1310 can also send the transfer length 302 for the application data 214. The application data 214 can be sent as the application units 304 or the logical boundaries to the application units 304 to the data preprocessor 1308.
  • Furthering the example, the data preprocessor 1308 can translate the application data 214 or the application units 304 to generate the formatted data 216 or the formatted units 306 of FIG. 3. Examples of the types of translation can be one of the methods described in FIG. 2 and FIG. 3 through FIG. 6. The data preprocessor 1308 can return the formatted data 216 or the formatted units 306 to the request distributor 1310 in the in-storage processing coordinator 1304. The request distributor 1310 can generate and issue the device requests 1302 for writes to the devices 1318 based on the formatting policies and policy for storing or for in-storage processing of the formatted data 216 or the formatted units 306.
  • Further continuing with the example, each of the devices 1318 can include an in-storage processing function or application and the in-storage processing engine 1322. Each of the devices 1318 can receive the device requests 1302 and at least a portion of the formatted data 216. Each of the devices 1318 can also include the output coordinator 1312, a portion of the request distributor 1310, or a combination thereof.
  • For illustrative purposes, although FIG. 13 depicts the device requests 1302 being issued to all of the devices 1318, it is understood that the request distributor 1310 can operate differently. For example, the device requests 1302 can be issued to some of the devices 1318 and not necessarily to all of them. Also for example, the device requests 1302 can be issued at different times or can be issued as part of the error handling examples as discussed in FIG. 2.
  • As a specific example for a decentralized coordination model, the in-storage processing coordinator 1304 can receive the application requests 220 from the application 202, can issue the device requests 1302 to the devices 1318, or a combination thereof. The request distributor 1310 in the in-storage processing coordinator 1304 can send or distribute the device requests 1302 to multiple devices 1318 based on a placement scheme.
  • Continuing with the specific example, the request distributor 1310 in each of the devices 1318 can receive the request from the in-storage processing coordinator 1304. The output coordinator 1312 can collect and manage the in-storage processing outputs 224 from the devices 1318 or one of the devices 1318.
  • Also as a specific example for a decentralized coordination model, there are various communication methods depending on the configuration of the storage group 206. The functions of the request distributor 1310 and the output coordinator 1312 in the devices 1318 in a decentralized coordination model will be described later.
  • Referring now to FIG. 14, therein is shown an operational view for the computing system 100 for in-storage processing in a centralized model. FIG. 14 depicts the in-storage processing coordinator 904 to be external to both the host computer 102 and the storage devices 218. Although the application 202 is shown outside of the host computer 102, it is understood that the application 202 can be executed by the host computer 102 as well as outside of the host computer 102. In addition, although the in-storage processing coordinator 904 is external to the host in FIG. 14, it is also understood that the in-storage processing coordinator 904 can be internal to the host, like in FIG. 10.
  • FIG. 14, FIG. 15, and FIG. 16 depict an example of an operational view of computing system 1300 of FIG. 13 in terms of processing data in the storage devices 218. That is, FIGS. 14, 15, and 16 focus on how to efficiently process/compute the stored data in the storage devices 218 with in-storage processing techniques.
  • In this example, the application 202 can issue application requests 220 for in-storage processing to the host computer 102. The host computer 102 can issue host requests 1402 based on the application requests 220 from the application 202. The host requests 1402 can be sent to the in-storage processing coordinator 904.
  • The in-storage processing coordinator 904 can translate the application data 214 of FIG. 2 and the application units 304 of FIG. 3 to generate the formatted data 216 of FIG. 2 and the formatted units 306 of FIG. 3. The in-storage processing coordinator 904 can also generate the device requests 1202 to the storage devices 218. The in-storage processing coordinator 904 can also collect and manage the in-storage processing outputs 224 from the storage devices 218, and can deliver an aggregated output 1404 back to the host computer 102, the application 202, or a combination thereof. The aggregated output 1404 is the combination of the in-storage processing outputs 224 from the storage devices 218. The aggregated output 1404 can be more than a concatenation of the in-storage processing outputs 224.
  • As a specific example, the in-storage processing coordinator 904 can include the request distributor 910. The request distributor 910 can receive the application requests 220 as the host requests 1402. The request distributor 910 can generate the device requests 1202 from the host requests 1402. The request distributor 910 can also generate the sub-application requests 222 of FIG. 7 as the device requests 1202.
  • As a further specific example, the in-storage processing coordinator 904 can include the data preprocessor 908. The data preprocessor 908 can receive the information from the application requests 220 or the host requests 1402 through the request distributor 910. The data preprocessor 908 can format the application data 214 as appropriate based on the placement scheme onto the storage devices 218.
  • Also as a specific example, the in-storage processing coordinator 904 can include the output coordinator 912. The output coordinator 912 can receive the in-storage processing outputs 224 from the storage devices 218. The output coordinator 912 can generate the aggregated output 1404 with the in-storage processing outputs 224. In this example, the output coordinator 912 can return the aggregated output 1404 to the host computer 102. The host computer 102 can also return the aggregated output 1404 to the application 202. The application 202 can continue to execute and utilize the in-storage processing outputs 224, the aggregated output 1404, or a combination thereof.
  • In this example, each of the storage devices 218 includes the in-storage processing engine 922. The in-storage processing engine 922 can receive and operate on a specific instance of the device requests 1202. The in-storage processing engine 922 can generate the in-storage processing output 224 to be returned to the in-storage processing coordinator 904 or, as a specific example, to the output coordinator 912.
  • Referring now to FIG. 15, therein is shown an operational view for a computing system 1500 in a decentralized model in an embodiment with one output coordinator 1512. The computing system 1500 can be the computing system 1100 of FIG. 11.
  • As an operational overview of this embodiment, the host computer 102 can issue an application request 220 to the storage devices 218 for in-storage processing. The host computer 102 and the storage devices 218 can be similarly partitioned as described in FIG. 11. Each of the storage devices 218 can perform the in-storage processing. Each of the storage devices 218 can provide its in-storage processing output 224 to the storage device 218 that received the application request 220 from the host computer 102. This storage device 218 can then return an aggregated output 1504 back to the host computer 102, the application 202, or a combination thereof. The application 202 can continue to execute and utilize the in-storage processing outputs 224, the aggregated output 1504, or a combination thereof.
  • Continuing with the example, the application request 220 can be issued to one of the storage devices 218. That one storage device 218 can issue the application request 220 or a device request 1202 to the other storage devices 218. As an example, the storage device 218 that received the application request 220 can decompose the application request 220 to partition the in-storage processing to the other storage devices 218. The device request 1202 can be that partitioned request based off the application request 220 and the in-storage processing execution by the previous storage devices 218.
  • This example depicts a number of the devices labeled as “DEV_1”, “DEV_2”, “DEV_3”, and through “DEV_N”. The term “N” in the figure is an integer. The storage devices 218 in this example can perform in-storage processing. Each of the storage devices 218 are shown including an in-storage processing engine 1522, a data preprocessor 1508, and an output coordinator 1512.
  • For illustrative purposes, all of the storage devices 218 are shown with the output coordinator 1512, although it is understood that the computing system 1500 can be partitioned differently. For example, only one of the storage devices 218 can include the output coordinator 1512. Further for example, the output coordinator 1512 in each of the storage devices 218 can operate differently from another. As a specific example, the output coordinator 1512 in DEV_2 through DEV_N can act as a pass-through to the next storage device 218 or can return the in-storage processing output 224 back to DEV_1. Each of the storage devices 218 can manage its request identification 710 of FIG. 7, the sub-request identification 708 of FIG. 7, or a combination thereof.
  • In this example, the host computer 102 can send the application request 220 to one of the storage devices 218 labeled DEV_1. The in-storage processing engine 1522 in DEV_1 can perform the appropriate level of in-storage processing and generates the in-storage processing output 224. In this example, the in-storage processing output 224 from DEV_1 can be referred to as a first output 1524.
  • Continuing with this example, the data preprocessor 1508 in DEV_1 can format or translate the information from the application request 220 that will be forwarded to DEV_2, DEV_3, and through to DEV_N. The in-storage processing engine 1522 in DEV_2 can generate the in-storage processing output 224, which can be referred to as a second output 1526. The output coordinator 1512 in DEV_2 can send the second output 1526 to DEV_1. The in-storage processing engine 1522 in DEV_3 can generate the in-storage processing output 224, which can be referred to as a third output 1528. The output coordinator 1512 in DEV_3 can send the third output 1528 to DEV_1. The in-storage processing engine 1522 in DEV_N can generate the in-storage processing output 224, which can be referred to as an Nth output. The output coordinator 1512 in DEV_N can send the Nth output to DEV_1. The output coordinator 1512 in DEV_1 generates the aggregated output 1504 that includes the first output 1524, the second output 1526, the third output 1528, and through the Nth output.
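  • The single-coordinator collection in this example can be sketched as follows, assuming each storage device exposes a process() call that runs its in-storage processing engine; the call name and device objects are hypothetical.

        def coordinate_on_first_device(application_request, first_device, peer_devices):
            # DEV_1 performs its own in-storage processing (the first output 1524) ...
            outputs = [first_device.process(application_request)]
            # ... forwards the (reformatted) request to DEV_2 .. DEV_N and collects
            # the second through Nth outputs returned by their output coordinators.
            for peer in peer_devices:
                outputs.append(peer.process(application_request))
            # DEV_1's output coordinator 1512 forms the aggregated output 1504.
            return outputs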
  • Referring now to FIG. 16, therein is shown an operational view for a computing system 1600 in a decentralized model in an embodiment with multiple output coordinators 1612. The computing system 1600 can be the computing system 1100 of FIG. 11.
  • As an operational overview of this embodiment, the host computer 102 can issue an application request 220 to the storage devices 218 for in-storage processing. The host computer 102 and the storage devices 218 can be similarly partitioned as described in FIG. 11. The application request 220 can be issued to one of the storage devices 218. That storage device 218 then performs the in-storage processing. The execution of the application request 220 and the in-storage processing results are issued or sent to another of the storage devices 218. This process can continue until all the storage devices 218 have performed the in-storage processing, and the last of the storage devices 218 can return its result to the first of the storage devices 218. That first of the storage devices 218 then returns an aggregated output 1604 back to the host computer 102, the application 202, or a combination thereof. The application 202 can continue to execute and utilize the in-storage processing outputs 224 of FIG. 2, the aggregated output 1604, or a combination thereof.
  • For illustrative purposes, this embodiment is described with DEV_1 providing the aggregated output 1604 to the host computer 102, although it is understood that this embodiment can operate differently. For example, the last device or DEV_N in this example can provide the aggregated output 1604 back to the host computer 102 instead of DEV_1.
  • This example depicts a number of the storage devices 218 labeled as “DEV_1”, “DEV_2”, “DEV_3”, and through “DEV_N”. The term “N” in the figure is an integer. The storage devices 218 in this example can perform in-storage processing. Each of the storage devices 218 are shown including an in-storage processing engine 1622, a data preprocessor 1608, and an output coordinator 1612.
  • For illustrative purposes, all of the storage devices 218 are shown with the output coordinator 1612, although it is understood that the computing system 1600 can be partitioned differently. For example, only one of the storage devices 218 can include the output coordinator 1612 with full functionality. Further for example, the output coordinator 1612 in each of the storage devices 218 can operate differently from another. As a specific example, the output coordinator 1612 in DEV_2 through DEV_N can act as a pass-through to the next storage device 218 or can return the aggregated output 1604 back to DEV_1.
  • In this example, the host computer 102 can send the application request 220 to one of the storage devices 218 labeled DEV_1. The in-storage processing engine 1622 in DEV_1 can perform the appropriate level of in-storage processing and can generate the in-storage processing output 224. In this example, the in-storage processing output 224 from DEV_1 can be referred to as a first output 1624. In this example, DEV_1 can decompose the application request 220 to partition the in-storage processing to DEV_2. The device request 1202 of FIG. 12 can be that partitioned request based off the application request 220 and the in-storage processing execution by DEV_1. This process of decomposing and partitioning can continue through DEV_N.
  • Continuing with this example, the data preprocessor 1608 in DEV_1 can format or translate the information from the application request 220 that will be forwarded to DEV_2. The data preprocessor 1608 in DEV_1 can also format or translate the in-storage processing output 224 from DEV_1 or the first output 1624.
  • Furthering this example, the output coordinator 1612 in DEV_1 can send the output of the data preprocessor 1608 in DEV_1, the first output 1624, a portion of the application request 220, or a combination thereof to DEV_2. DEV_2 can continue the in-storage processing of the application request 220 sent to DEV_1.
  • Similarly, the in-storage processing engine 1622 in DEV_2 can perform the appropriate level of in-storage processing based on the first output 1624 and can generate the in-storage processing output 224 from DEV_2. In this example, the in-storage processing output 224 from DEV_2 can be referred to as a second output 1626, also described as "a partial aggregated output."
  • Continuing with this example, the data preprocessor 1608 in DEV_2 can format or translate the information from the application request 220 or the second output 1626 that will be forwarded to DEV_3. The data preprocessor 1608 in DEV_2 can also format or translate the in-storage processing output 224 from DEV_2 or the second output 1626.
  • Furthering this example, the output coordinator 1612 in DEV_2 can send the output of the data preprocessor 1608 in DEV_2, the second output 1626, a portion of the application request 220, or a combination thereof to DEV_3. DEV_3 can continue the in-storage processing of the application request 220 sent to DEV_1.
  • Similarly, the in-storage processing engine 1622 in DEV_3 can perform the appropriate level of in-storage processing based on the second output 1626 and can generate the in-storage processing output 224 from DEV_3. In this example, the in-storage processing output 224 from DEV_3 can be referred to as a third output 1628.
  • Continuing with this example, the data preprocessor 1608 in DEV_3 can format or translate the information from the application request 220 or the third output 1628 that will be forwarded to DEV_1. The data preprocessor 1608 in DEV_3 can also format or translate the in-storage processing output 224 from DEV_3 or the third output 1628.
  • Furthering this example, the output coordinator 1612 in DEV_3 can send the output of the data preprocessor 1608 in DEV_3, the third output 1628, a portion of the application request 220, or a combination thereof to DEV_1. DEV_1 can return to the host computer 102 or the application 202 the aggregated output 1604 based on the first output 1624, the second output 1626, and the third output 1628.
  • In this example, in-storage processing by one of the storage devices 218 that follows a previous storage device 218 can aggregate the in-storage processing outputs 224 of the storage devices 218 that preceded it. In other words, the second output 1626 is an aggregation of the in-storage processing output 224 from the DEV_2 as well as the first output 1624. The third output 1628 is an aggregation of the in-storage processing output from DEV_3 as well as the second output 1626.
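  • A minimal sketch of the daisy-chained aggregation in FIG. 16 follows, assuming each device exposes a process() call that can take the predecessor's partial aggregated output and that the caller supplies a combine() function; both are assumptions for illustration.

        def chained_aggregate(application_request, storage_devices, combine):
            partial = None
            for device in storage_devices:
                # Each device may use the partial aggregated output it received
                # (None for DEV_1) when running its in-storage processing.
                output = device.process(application_request, partial)
                partial = output if partial is None else combine(partial, output)
            # The final partial aggregate is the aggregated output 1604 returned
            # to the host computer 102 or the application 202.
            return partial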
  • Referring now to FIG. 17, therein is shown an example of a flow chart for the request distributor 210 and the data preprocessor 208. The request distributor 210 and the data preprocessor 208 can be operated in a centralized or decentralized model as described earlier, as examples.
  • As an overview of this example, this flow chart depicts how the application data 214 of FIG. 2 can be translated to the formatted data 216 of FIG. 2 based on the storage policies. As examples, the storage policies can include the split policy, the split+padding policy, the split+redundancy policy, and storage without any chunking of the application units 304 of FIG. 3 to the formatted units 306 of FIG. 3. This example can represent the application request 220 of FIG. 2 as a write request.
  • The request distributor 210 of FIG. 2 can receive the application request 220 directly or some form of the application request 220 through the host computer 102 of FIG. 1. The application request 220 can include information such as the data address 1204 of FIG. 12, the application data 214, the transfer length 302 of FIG. 3, the logical boundary 1206 of FIG. 12, or a combination thereof.
  • As an example, the request distributor 210 can execute a chunk comparison 1702. The chunk comparison 1702 compares the transfer length 302 with a chunk size 1704 of the storage group 206, in this example operating as a RAID system. The chunk size 1704 represents a discrete unit of storage size to be stored in the storage devices 218 of FIG. 2 in the storage group 206 of FIG. 2. As an example, the chunk size 1704 can represent the size of one of the formatted units 306.
  • If the chunk comparison 1702 determines the transfer length 302 is greater than the chunk size 1704, the handling of the application request 220 can continue to a boundary query 1706. If the chunk comparison 1702 determines that the transfer length is not greater than the chunk size 1704, the handling of the application request 220 can continue to a device selection 1708.
  • The branch of the flow chart starting with the device selection 1708 represents the handling of the application data 214 without chunking of the application units 304 or the application data 214. An example of this can be the mirroring function as described in FIG. 6.
  • Continuing with this branch of the flow chart, the device selection 1708 determines which of the storage devices 218 in the storage group 206 will store the application data 214 as part of the application request 220. The request distributor 210 can generate the device requests 1202 of FIG. 12 as appropriate based on the application request 220.
  • When the logical boundary 1206 of FIG. 12 for the application units 304 is included with the application request 220, the request distributor 210 can distribute the application request 220 by splitting the application request 220 into sub-application requests 222 of FIG. 2 or by sending identical application requests 220 to multiple storage devices 218.
  • In the example for the sub-application requests 222, the request distributor 210 can make the size of each of the sub-application requests 222 a multiple of the logical boundary 1206 of the application units 304. The sub-application requests 222 can be the device requests 1202 issued to the storage devices 218.
  • In the example for identical application requests 220, multiple storage devices 218 can receive these application requests 220. The first in-storage processing output 224 of FIG. 2 returned can be accepted by the output coordinator 212 of FIG. 2 to be returned back to the application 202. The identical application requests 220 can be the device requests 1202 issued to the storage devices 218.
  • When the logical boundary 1206 for the application units 304 is not included, the request distributor 210 can split the application request 220 into the sub-application requests 222. These sub-application requests 222 can each be of an arbitrary length. The requests can be handled by the split function of the data preprocessor 208. The sub-application requests 222 can be the device requests 1202 issued to the storage devices 218.
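  • The two splitting alternatives above can be sketched as follows; the even-division heuristic and the function name are assumptions, and the last sub-request simply carries any remainder.

        import math

        def split_to_sub_requests(application_data, device_count, logical_boundary=None):
            # Returns sub-application request payloads for the request distributor 210.
            if not application_data:
                return []
            if logical_boundary:
                # Size each sub-application request 222 as a multiple of the logical boundary 1206.
                step = math.ceil(len(application_data) / device_count / logical_boundary) * logical_boundary
            else:
                # No logical boundary supplied: arbitrary-length split for the split function.
                step = math.ceil(len(application_data) / device_count)
            return [application_data[i:i + step]
                    for i in range(0, len(application_data), step)]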
  • The request distributor 210, the data preprocessor 208, or a combination thereof can continue from the device selection 1708 to an address calculation 1710. The address calculation 1710 can calculate the address for the application data 214 or the formatted data 216 to be stored in the storage devices 218 receiving the device requests 1202. For illustrative purposes, the address calculation 1710 is described being performed by the request distributor 210 or the data preprocessor 208, although it is understood that the address calculation 1710 can be performed elsewhere. For example, the storage devices 218 receiving the device requests 1202 can perform the address calculation 1710. Also for example, the address can be a pass-through from the application request 220 in which case the address calculation 1710 could have been performed by the application 202 of FIG. 2 or by the host computer 102.
  • The flow chart can continue to a write non-chunk function 1712. Each of the storage devices 218 receiving the device request 1202 can write the application data 214 or the formatted data 216 on the storage device 218. Since each of the storage devices 218 contain the application data 214 in a complete or non-chunked form, any of the application data 214 or the formatted data 216 can undergo in-storage processing by the storage device 218 with the application data 214.
  • Returning to the branch of the flow chart from the boundary query 1706, the boundary query 1706 determines if the logical boundary 1206 is provided in the application request 220, as an example. If the boundary query 1706 determines that the logical boundary 1206 is provided, the flow chart can continue to a padding query 1714. If the boundary query 1706 determines that the logical boundary 1206 is not provided, the flow chart can continue to a normal RAID query 1716.
  • The branch of the flow chart starting with the normal RAID query 1716 represents the handling of the application data 214 with chunking of the application units 304 (or some of the application units 304). An example of this can be the split function described in FIG. 3. As an example, this branch of the flow chart can be used for unstructured application data 214 or for application data 214 with no logical boundary 1206. The chunk size 1704 can be with a fixed size or a variable-length size.
  • Continuing with this branch of the flow chart, the normal RAID query 1716 determines whether the application request 220 is for a normal RAID function as the in-storage processing. If so, the flow chart can continue to a chunk function 1718. If not, the flow chart can continue to another portion of the flow chart or can return an error status back to the application 202.
  • In this example, the chunk function 1718 can split the application data 214 or the application units 304 or some portion of them in the chunk size 1704 for the storage devices 218 to receive the application data 214. As an example, the data preprocessor 208 can perform the chunk function 1718 to generate the formatted data 216 or the formatted units 306 with the application data 214 translated to the chunk size 1704. The data preprocessor 208 can interact with the request distributor 210 to issue the device requests 1202 to the storage devices 218.
  • For illustrative purposes, the chunk function 1718 is described as being performed by the data preprocessor 208, although it is understood that the chunk function 1718 can be executed differently. For example, the storage devices 218 receiving the device requests 1202 can perform the chunk function 1718 as part of the in-storage processing at the storage devices 218.
  • In this example, the flow chart can continue to a write chunk function 1719. The write chunk function 1719 is an example of the in-storage processing at the storage devices 218. The write chunk function 1719 writes the formatted data 216 or the formatted units 306 at the storage devices 218 receiving the device requests 1202 from the request distributor 210.
  • Returning to the branch of the flow chart from the padding query 1714, the branch below the padding query 1714 represents the handling of the application data 214 or the application units 304 or a portion thereof with the data pads 402. An example of this can be the split+padding function as described in FIG. 4.
  • The padding query 1714 determines if the application data 214 or the application units 304 or some portion of them should be padded to generate the formatted data 216 or the formatted units 306. The data preprocessor 208 can perform the padding query 1714.
  • When the padding query 1714 determines that padding of the application units 304 is needed, the flow chart can continue to an application data sizing 1720. The application data sizing 1720 calculates a data size 1722 of the application data 214 for the split+padding function. The data size 1722 is the amount of the application data 214 to be partitioned for the formatted data 216. As an example, the application data sizing 1720 can determine the data size 1722 for the amount of the application unit 304 or multiple application units 304 for each of the formatted units 306. In this example, each of the formatted units 306 is of the chunk size 1704 and the data size 1722 is per chunk.
  • As a specific example, the data size 1722 can be calculated with Equation 1 below.

  • data size 1722=(floor(chunk size 1704/logical boundary 1206))×logical boundary 1206  (Equation 1)
  • In other words, the data size is calculated with the floor function of the chunk size 1704 divided by the logical boundary 1206. The result of the floor function is then multiplied by the logical boundary 1206 to generate the data size 1722.
  • The flow chart can continue to a pad sizing 1724. The pad sizing 1724 calculates a pad size 1726 for the data pads 402 for each of the formatted units 306. As an example, the pad size 1726 can be calculated with Equation 2 below.

  • pad size 1726=chunk size 1704−data size 1722  (Equation 2)
  • In other words, the pad size 1726 per chunk, or per each of the formatted units 306, can be calculated as the chunk size 1704 minus the data size 1722 per chunk or per each of the formatted units 306.
  • The flow chart can continue to a chunk number calculation 1728. The chunk number calculation 1728 determines a chunk number 1730 or the number of the formatted units 306 needed for the application data 214. The chunk number 1730 can be used to determine the size or length of the formatted data 216. The data preprocessor 208 can perform the chunk number calculation 1728.
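  • To make the sizing arithmetic concrete, the following is a short Python sketch of the application data sizing 1720 (Equation 1), the pad sizing 1724 (Equation 2), and the chunk number calculation 1728. The ceiling-based chunk count is an assumption, since the flow chart does not give an explicit formula for the chunk number 1730.

```python
import math

def split_padding_sizes(app_data_len: int, chunk_size: int, logical_boundary: int):
    """Per-chunk sizing for the split+padding path."""
    # Equation 1: largest multiple of the logical boundary that fits in one chunk.
    data_size = (chunk_size // logical_boundary) * logical_boundary
    # Equation 2: the remainder of the chunk is filled by the data pad.
    pad_size = chunk_size - data_size
    # Assumed: number of formatted units needed to hold all of the application data.
    chunk_number = math.ceil(app_data_len / data_size)
    return data_size, pad_size, chunk_number

# Example: 4096-byte chunks, 1000-byte logical boundary, 10000 bytes of application data.
print(split_padding_sizes(app_data_len=10000, chunk_size=4096, logical_boundary=1000))
# -> (4000, 96, 3)
```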
  • The flow chart can continue to a split function 1732. The split function 1732 partitions the application data 214 to the data size 1722 for each of the formatted units 306. The split function 1732 is part of generating the formatted data 216 where the application units 304 are aligned with the chunk size 1704 or the formatted units 306. The data preprocessor 208 can perform the split function 1732.
  • The flow chart can continue to a write pad function 1734. The write pad function 1734 performs the in-storage processing of writing the formatted data 216 with the application data 214 partitioned to the data size 1722 and with the data pads 402. The data pads 402 can include additional information, such as parity, metadata, synchronization fields, or identification fields. The request distributor 210 can send the device requests 1202 to the storage devices 218 to perform the write pad function 1734 of the formatted data 216.
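  • The split function 1732 followed by padding of each formatted unit can be sketched in Python as below, under the assumption that the data pads are zero-filled; as noted above, a real data pad could instead carry parity, metadata, synchronization fields, or identification fields. The function name build_padded_units is hypothetical.

```python
def build_padded_units(app_data: bytes, chunk_size: int, logical_boundary: int) -> list:
    """Partition application data to data_size bytes per chunk and append a data
    pad so that every formatted unit is exactly chunk_size bytes (split+padding)."""
    data_size = (chunk_size // logical_boundary) * logical_boundary  # Equation 1
    units = []
    for start in range(0, len(app_data), data_size):
        piece = app_data[start:start + data_size]
        pad_len = chunk_size - len(piece)        # pad every unit up to the chunk size
        units.append(piece + b"\x00" * pad_len)  # assumed zero-fill data pad
    return units

units = build_padded_units(b"A" * 9000, chunk_size=4096, logical_boundary=1000)
print([len(u) for u in units])  # [4096, 4096, 4096]: chunk-size aligned formatted units
```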
  • Returning to the padding query 1714, when the padding query 1714 determines that padding of the application units 304 is not needed, the flow chart can continue to a redundancy query 1736. When the redundancy query 1736 determines that redundancy of the application data 214 is needed, then this branch of the flow chart represents the redundancy function. As an example, the redundancy function is described in FIG. 6.
  • The flow chart can continue from the redundancy query 1736 to the application data sizing 1720. As an example, FIG. 17 depicts the application data sizing 1720 under the redundancy query 1736 to be a separate function from the application data sizing 1720 under the padding query 1714, although it is understood that the two functions can perform the same operations and can also be the same function. The application data sizing 1720 under the redundancy query 1736 can be computed using the expression found in Equation 1 described earlier.
  • The flow chart can continue to a chunk function 1718. The chunk function 1718 splits or partitions the application data 214 to the formatted data 216 as described in FIG. 6. The data preprocessor 208 can perform the chunk function 1718. As an example, FIG. 17 depicts the chunk function 1718 under the normal RAID query 1716 to be a separate function from the chunk function 1718 under the redundancy query 1736, although it is understood that the two functions can perform the same operations and can also be the same function.
  • The flow chart can continue to a redundancy function 1738. For each chunk or for each of the formatted units 306, the redundancy function 1738 copies the application data 214 that is in the range between the data size 1722 and the chunk size 1704 to additional chunks to generate the replica data 602 of FIG. 6.
  • The flow chart can continue to a write redundancy function 1740. The write redundancy function 1740 writes the formatted data 216 including the application data 214 and the replica data 602. The request distributor 210 can issue the device requests 1202 to the storage devices 218 to perform the write redundancy function 1740. Returning to the branch with the redundancy query 1736, when the redundancy query 1736 determines that redundancy is not needed, the flow chart can continue to the normal RAID query 1716.
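  • One reading of the redundancy function 1738 is sketched below in Python: for each chunk-size formatted unit, the bytes in the range between the data size 1722 and the chunk size 1704 are copied into additional chunks as replica data. The function name and the single-copy default are illustrative assumptions.

```python
def build_redundant_units(app_data: bytes, chunk_size: int, logical_boundary: int,
                          copies: int = 1) -> list:
    """For each formatted unit, copy the region between data_size and chunk_size
    into additional chunks as replica data (redundancy path)."""
    data_size = (chunk_size // logical_boundary) * logical_boundary  # Equation 1
    units = []
    for start in range(0, len(app_data), chunk_size):
        chunk = app_data[start:start + chunk_size]
        units.append(chunk)          # original chunk of application data
        tail = chunk[data_size:]     # data in the range [data_size, chunk_size)
        for _ in range(copies):
            units.append(tail)       # replica data written to an additional chunk
    return units

units = build_redundant_units(b"B" * 8192, chunk_size=4096, logical_boundary=1000)
print([len(u) for u in units])  # original chunks interleaved with their replica tails
```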
  • For illustrative purposes, the flow chart is described with the split+padding function separately from the redundancy function, although it is understood that the flow chart can provide a different operation. For example, the flow chart can be arranged to provide the split+redundancy function as described in FIG. 5. As an example, this can be accomplished with the redundancy query 1736 being placed before the write pad function 1734. Furthering this example, the redundancy function 1738 above could be modified to operate only on the non-aligned application units 304 to form the aligned data 504 of FIG. 5 as opposed to the replica data 602. The modified redundancy function can be followed by a further write function. The further write function would combine portions of the write pad function 1734 and the write redundancy function 1740. The write pad function 1734 can be utilized to write a portion of the formatted data 216 with the data pads 402, and the write redundancy function 1740 can write the aligned data 504 as opposed to the replica data 602.
  • Referring now to FIG. 18, therein is shown an example of a flow chart for a mirroring function for centralized and decentralized embodiments. As examples, the centralized embodiment can be the computing system 900 of FIG. 9 or the computing system 1000 of FIG. 10. As an example, the decentralized embodiment can be the computing system 1100 of FIG. 11.
  • The flow chart on the left-hand side of FIG. 18 represents an example of a flow chart for a centralized embodiment. The flow chart on the right-hand side of FIG. 18 represents an example of a flow chart for a decentralized embodiment.
  • Starting with the centralized embodiment, the request distributor 210 of FIG. 2 can receive the application request 220 of FIG. 2. The application request 220 can include the data address 1204 of FIG. 12, the application data 214 of FIG. 2, and the transfer length 302 of FIG. 3.
  • For example, the data preprocessor 208 of FIG. 2 can execute a replica query 1802. The replica query 1802 determines if the replica data 602 of FIG. 6 should be created or not. As an example, the replica query 1802 can make this determination by checking whether a number 1804 of the replica data 602 being requested is greater than zero. If so, the flow chart can continue to a create replica 1806. If not, the flow chart can continue to the device selection 1708.
  • As an example, the device selection 1708 can be the same function or perform the same or similar function as described in FIG. 17. The flow chart can continue to the address calculation 1710. As with the device selection 1708, the address calculation 1710 can be the same function or perform the same or similar function as described in FIG. 17. The flow chart can continue to the write non-chunk function 1712. As with the address calculation 1710, the write non-chunk function 1712 can be the same function or perform the same or similar function as described in FIG. 17.
  • As an example, the request distributor 210 can execute the device selection 1708, the address calculation 1710, or a combination thereof, and include the outputs of these operations as part of the device request 1202 of FIG. 12. The write non-chunk function 1712 can be performed by one of the storage devices 218 to store the application data 214.
  • Returning to the replica query 1802, when the replica query 1802 determines the replica data 602 of FIG. 6 should be generated, then the flow chart can continue to the create replica 1806. As an example, the replica query 1802 can make this determination when the number 1804 of replicas sought is greater than zero.
  • In this example, the create replica 1806 can generate the replica data 602 from the application data 214. The replica data 602 can be as described in FIG. 6. As an example, the data preprocessor 208 can perform the create replica 1806. The create replica 1806 can generate the number 1804 of the replica data 602 as needed and not just one.
  • The flow chart can continue to a prepare replica 1808. As an example, the request distributor 210 can prepare each of the replica data 602 for the device selection 1708. The replica data 602 can be written to the storage devices 218 following the flow chart from the device selection 1708, as already described.
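  • The centralized mirroring path can be summarized with the Python sketch below. The replica query, create replica, and prepare replica steps correspond to the flow chart; the modulo-based device selection and address calculation are placeholders, since the figures do not fix those formulas.

```python
def mirror_centralized(app_data: bytes, data_address: int, number: int, devices: list) -> list:
    """Centralized embodiment: host-side components create the replicas and
    issue one device request per copy of the application data."""
    if number > 0:                                                      # replica query 1802
        copies = [app_data] + [bytes(app_data) for _ in range(number)]  # create replica 1806
    else:
        copies = [app_data]
    requests = []
    for idx, payload in enumerate(copies):                    # prepare replica 1808
        device_id = (data_address + idx) % len(devices)       # device selection (assumed)
        lba = data_address // len(devices)                    # address calculation (assumed)
        requests.append((devices[device_id], lba, payload))   # write non-chunk 1712
    return requests

# Example: write one original plus two replicas across four devices.
reqs = mirror_centralized(b"data", data_address=7, number=2, devices=["d0", "d1", "d2", "d3"])
print([(dev, lba, len(payload)) for dev, lba, payload in reqs])
```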
  • Turning to the flow chart for the decentralized embodiment on the right-hand side of FIG. 18, the request distributor 210 can receive the application request 220. The application request 220 can include the data address 1204, the application data 214, the transfer length 302, and the number 1804 of the replica data 602.
  • The request distributor 210 can send one of the device requests 1202 to one of the storage devices 218. That storage device 218 can perform the address calculation 1710. As an example, the address calculation 1710 can be the same function or perform the same or similar function as described in FIG. 17 and as for the centralized embodiment.
  • In this example, the same storage device 218 can also perform the write non-chunk function 1712. As an example, the write non-chunk function 1712 can be the same function or perform the same or similar function as described in FIG. 17 and as for the centralized embodiment.
  • The flow chart can continue to the replica query 1802. As an example, the replica query can be the same function or perform the same or similar function as described for the centralized embodiment. If the number 1804 for the replica data 602 is not greater than zero, the process to write additional data stops for this particular application request 220.
  • If the replica query 1802 determines that the number 1804 for the replica data 602 is greater than zero, then the flow chart can continue to a group selection 1810. The group selection 1810 can select one of the storage devices 218 in the same replica group 1812. The replica group 1812 is a portion of the storage devices 218 of FIG. 2 in the storage group 206 of FIG. 2 designated to be part of a redundancy function for the application data 214 and for in-storage processing. The request distributor 210 can perform the replica query 1802, the group selection 1810, or a combination thereof.
  • The flow chart can continue to a number update 1814. The number update 1814 can decrement the number 1804 for replica data 602 still to be written to the replica group 1812. The decrement amount can be by an integer value, such as one. The request distributor 210 can perform the number update 1814.
  • The flow chart can continue to a request generation 1816. The request generation 1816 generates one of the device requests 1202 to another of the storage devices 218 in the replica group 1812 for writing the replica data 602. The request distributor 210 can perform the request generation 1816.
  • The flow chart can loop back (not drawn in FIG. 18) to the replica query 1802 and iterate until the number 1804 has reached zero. At this point, the replica data 602 has been written to the replica group 1812.
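  • A minimal Python sketch of the decentralized loop follows. Each device writes its copy, and while the number 1804 remains greater than zero, the next device in the replica group 1812 is selected, the number is decremented, and a new device request is generated. The StorageDevice stub and the next-in-group selection policy are assumptions for illustration.

```python
class StorageDevice:
    """Toy stand-in for a compute-enabled storage device."""
    def __init__(self, name: str):
        self.name, self.blocks = name, {}

    def write(self, lba: int, payload: bytes) -> None:
        self.blocks[lba] = payload

def mirror_decentralized(app_data: bytes, data_address: int, number: int,
                         replica_group: list, start: int = 0) -> None:
    """Decentralized embodiment: writes propagate device-to-device until the
    replica count reaches zero."""
    device_idx, remaining = start, number
    while True:
        replica_group[device_idx].write(data_address, app_data)  # write non-chunk 1712
        if remaining <= 0:                                       # replica query 1802
            break
        device_idx = (device_idx + 1) % len(replica_group)       # group selection 1810 (assumed)
        remaining -= 1                                           # number update 1814
        # request generation 1816: loop back and write at the newly selected device

group = [StorageDevice(f"dev{i}") for i in range(3)]
mirror_decentralized(b"payload", data_address=42, number=2, replica_group=group)
print([(device.name, list(device.blocks)) for device in group])  # original plus two replicas
```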
  • For illustrative purposes, the decentralized embodiment is described as operating in a serial manner writing to one of the storage devices 218 at a time, although it is understood that the decentralized embodiment can operate differently. For example, the request distributor 210 can issue a number of the device requests 1202 to the storage devices 218 in the replica group 1812 and have the replica data 602 written on multiple storage devices 218 simultaneously before the other storage devices 218 in the replica group 1812 complete the write.
  • It has been discovered that the computing system provides efficient distributed processing by providing methods and apparatuses for performing in-storage processing of application data across multiple storage devices. An execution of an application can be shared by distributing the execution among the various storage devices in a storage group. Each of the storage devices can perform in-storage processing with the application data as requested by an application request.
  • It has also been discovered that the computing system can reduce overall system power consumption by reducing the number of inputs/outputs between the application execution and the storage device. This reduction is achieved by having the devices perform the in-storage processing instead of mere storage, read, and re-store by the application. Instead, the in-storage processing outputs can be returned as an aggregated output from the various devices that performed the in-storage processing back to the application. The application can continue to execute and utilize the in-storage outputs, the aggregated output, or a combination thereof.
  • It has been discovered that the computing system provides for reduced total cost of ownership by providing formatting and translation functions of the application data for different configurations or organizations of the storage group. Further, the computing system also provides translation for the type of in-storage processing to be carried out by the devices in the storage group. Examples of types of translation or formatting include split, split+padding, split+redundancy, and mirroring.
  • It has been discovered that the computing system provides more efficient execution of the application with fewer interrupts to the application via the output coordination of the in-storage processing outputs from the storage devices. The output coordination can buffer the in-storage processing outputs and can also sort the order of each of the in-storage processing outputs before returning an aggregated output to the application. The application can continue to execute and utilize the in-storage outputs, the aggregated output, or a combination thereof.
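  • As a simple illustration of the output coordination described here, the sketch below buffers the in-storage processing outputs, sorts them back into request order, and joins them into a single aggregated output; the sequence field and the payload layout are assumptions for illustration.

```python
def aggregate_outputs(isp_outputs: list) -> bytes:
    """Buffer per-device in-storage processing outputs, restore their original
    ordering, and return one aggregated output to the application."""
    buffered = list(isp_outputs)                          # buffer results as they arrive
    buffered.sort(key=lambda out: out["sequence"])        # sort into request order
    return b"".join(out["payload"] for out in buffered)   # aggregated output

# Example: results arrive out of order from three storage devices.
outputs = [{"sequence": 2, "payload": b"C"},
           {"sequence": 0, "payload": b"A"},
           {"sequence": 1, "payload": b"B"}]
print(aggregate_outputs(outputs))  # b'ABC'
```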
  • It has been discovered that the computing system further minimizes integration obstacles by allowing the devices in the storage group to have different or the same functionalities. As an example, one of the devices can function as the only output coordinator for all the in-storage processing outputs from the other devices. As a further example, the aggregation function can be distributed amongst the devices passing along and performing partial aggregation from device to device until one of the devices returns the full aggregated output back to the application. The application can continue to execute and utilize the in-storage outputs, the aggregated output, or a combination thereof.
  • The modules described in this application can be hardware implementations or hardware accelerators, either within the computing system 100 or external to the computing system 100.
  • The modules described in this application can be implemented as instructions stored on a non-transitory computer readable medium to be executed by the computing system 100. The non-transitory computer readable medium can include memory internal to or external to the computing system 100. The non-transitory computer readable medium can include non-volatile memory, such as a hard disk drive, non-volatile random access memory (NVRAM), solid state drive (SSD), compact disk (CD), digital video disk (DVD), or universal serial bus (USB) flash memory devices. The non-transitory computer readable medium can be integrated as a part of the computing system 100 or installed as a removable portion of the computing system 100.
  • Referring now to FIG. 19, therein is shown a flow chart of a method 1900 of operation of a computing system 100 in an embodiment of the present invention. The method 1900 includes: performing in-storage processing with a storage device with formatted data based on application data from an application in a block 1902; and returning an in-storage processing output from the storage device to the application for continued execution in a block 1904.
  • The method 1900 can further include receiving a sub-application request at the storage device based on an application request from the application for performing in-storage processing. The method 1900 can further include sorting in-storage processing outputs from a storage group including the storage device. The method 1900 can further include issuing a device request based on an application request from the application to a storage group including the storage device.
  • The method 1900 can further include issuing a device request from the storage device; receiving the device request at another storage device; generating another device request by the another storage device; and receiving the another device request by yet another storage device.
  • The method 1900 can further include sending in-storage processing outputs by a storage group including the storage device to be aggregated and sent to the application. The method 1900 can further include aggregating an in-storage processing output as a partial aggregated output to be returned to the application. The method 1900 can further include generating the formatted data based on the application data. The method 1900 can further include generating a formatted unit of the formatted data with an application unit of the application data and a data pad. The method 1900 can further include generating a formatted unit of the formatted data with non-aligned instances of application units of the application data and a data pad.
  • While the invention has been described in conjunction with a specific best mode, it is to be understood that many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the foregoing description. Accordingly, it is intended to embrace all such alternatives, modifications, and variations that fall within the scope of the included claims. All matters set forth herein or shown in the accompanying drawings are to be interpreted in an illustrative and non-limiting sense.

Claims (20)

What is claimed is:
1. A computing system comprising:
a storage device configured to:
perform in-storage processing with formatted data based on application data from an application; and
return an in-storage processing output to the application for continued execution.
2. The system as claimed in claim 1 wherein the storage device is further configured to receive a sub-application request based on an application request from the application for performing in-storage processing.
3. The system as claimed in claim 1 wherein the storage device is further configured to generate an aggregated output from in-storage processing outputs from one or more other storage devices and return the aggregated output to the application for continued execution.
4. The system as claimed in claim 1 wherein the storage device is further configured to issue a device request based on an application request from the application to at least one of other storage devices.
5. The system as claimed in claim 1 wherein:
the storage device is further configured to:
issue a device request;
further comprising:
another storage device configured to:
receive the device request,
generate another device request; and
yet another storage device configured to receive the another device request.
6. The system as claimed in claim 1 further comprising a storage group including the storage device, configured to send in-storage processing outputs to be aggregated and sent to the application.
7. The system as claimed in claim 1 wherein the storage device is further configured to aggregate an in-storage processing output as a partial aggregated output to be returned to the application.
8. The system as claimed in claim 1 wherein the storage device is further configured to generate the formatted data from the application data.
9. The system as claimed in claim 1 wherein the storage device is further configured to generate a formatted unit of the formatted data with an application unit of the application data and a data pad.
10. The system as claimed in claim 1 wherein the storage device is further configured to generate a formatted unit of the formatted data with non-aligned instances of application units of the application data and a data pad.
11. A method of operation of a computing system comprising:
performing in-storage processing with a storage device with formatted data based on application data from an application; and
returning an in-storage processing output from the storage device to the application for continued execution.
12. The method as claimed in claim 11 further comprising receiving a sub-application request at the storage device based on an application request from the application for performing in-storage processing.
13. The method as claimed in claim 11 further comprising sorting in-storage processing outputs from a storage group including the storage device.
14. The method as claimed in claim 11 further comprising issuing a device request based on an application request from the application to a storage group including the storage device.
15. The method as claimed in claim 11 further comprising:
issuing a device request from the storage device;
receiving the device request at another storage device;
generating another device request by the another storage device; and
receiving the another device request by yet another storage device.
16. The method as claimed in claim 11 further comprising sending in-storage processing outputs by a storage group including the storage device to be aggregated and sent to the application.
17. The method as claimed in claim 11 further comprising aggregating an in-storage processing output as a partial aggregated output to be returned to the application.
18. The method as claimed in claim 11 further comprising generating the formatted data based on the application data.
19. The method as claimed in claim 11 further comprising generating a formatted unit of the formatted data with an application unit of the application data and a data pad.
20. The method as claimed in claim 11 further comprising generating a formatted unit of the formatted data with non-aligned instances of application units of the application data and a data pad.
US14/817,815 2014-12-31 2015-08-04 Computing system with distributed compute-enabled storage group and method of operation thereof Abandoned US20160191665A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US14/817,815 US20160191665A1 (en) 2014-12-31 2015-08-04 Computing system with distributed compute-enabled storage group and method of operation thereof
KR1020150190827A KR102581374B1 (en) 2014-12-31 2015-12-31 Computing system with distributed compute-enabled storage group and operating method thereof
US17/955,310 US20230028569A1 (en) 2014-12-31 2022-09-28 Computing system with distributed compute-enabled storage group and method of operation thereof

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201462098530P 2014-12-31 2014-12-31
US14/817,815 US20160191665A1 (en) 2014-12-31 2015-08-04 Computing system with distributed compute-enabled storage group and method of operation thereof

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/955,310 Continuation US20230028569A1 (en) 2014-12-31 2022-09-28 Computing system with distributed compute-enabled storage group and method of operation thereof

Publications (1)

Publication Number Publication Date
US20160191665A1 true US20160191665A1 (en) 2016-06-30

Family

ID=56165750

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/817,815 Abandoned US20160191665A1 (en) 2014-12-31 2015-08-04 Computing system with distributed compute-enabled storage group and method of operation thereof
US17/955,310 Pending US20230028569A1 (en) 2014-12-31 2022-09-28 Computing system with distributed compute-enabled storage group and method of operation thereof

Family Applications After (1)

Application Number Title Priority Date Filing Date
US17/955,310 Pending US20230028569A1 (en) 2014-12-31 2022-09-28 Computing system with distributed compute-enabled storage group and method of operation thereof

Country Status (2)

Country Link
US (2) US20160191665A1 (en)
KR (1) KR102581374B1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11157356B2 (en) * 2018-03-05 2021-10-26 Samsung Electronics Co., Ltd. System and method for supporting data protection across FPGA SSDs


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040066739A1 (en) * 2002-10-07 2004-04-08 Koninklijke Philips Electronics N.V. Simplified implementation of optimal decoding for COFDM transmitter diversity system
KR20110041005A (en) * 2009-10-15 2011-04-21 아쿠아골드 주식회사 Beverage dispensing valve
KR20120096329A (en) * 2011-02-22 2012-08-30 삼성전자주식회사 Integrated system comprising signal analysys circuit

Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6145017A (en) * 1997-08-05 2000-11-07 Adaptec, Inc. Data alignment system for a hardware accelerated command interpreter engine
US6711606B1 (en) * 1998-06-17 2004-03-23 International Business Machines Corporation Availability in clustered application servers
US6804676B1 (en) * 1999-08-31 2004-10-12 International Business Machines Corporation System and method in a data processing system for generating compressed affinity records from data records
US20100185719A1 (en) * 2000-06-26 2010-07-22 Howard Kevin D Apparatus For Enhancing Performance Of A Parallel Processing Environment, And Associated Methods
US7047359B1 (en) * 2002-12-17 2006-05-16 Storage Technology Corporation Method, system and product for managing a virtual storage system
US20040186826A1 (en) * 2003-03-21 2004-09-23 International Business Machines Corporation Real-time aggregation of unstructured data into structured data for SQL processing by a relational database engine
US20050010709A1 (en) * 2003-04-23 2005-01-13 Dot Hill Systems Corporation Application server blade for embedded storage appliance
US20050071546A1 (en) * 2003-09-25 2005-03-31 Delaney William P. Systems and methods for improving flexibility in scaling of a storage system
US20050125625A1 (en) * 2003-12-09 2005-06-09 Michael Kilian Methods and apparatus for parsing a content address to facilitate selection of a physical storage location in a data storage system
US20070220604A1 (en) * 2005-05-31 2007-09-20 Long Kurt J System and Method of Fraud and Misuse Detection
US7636801B1 (en) * 2005-06-20 2009-12-22 Symantec Operating Corporation Coordination of quality of service in a multi-layer virtualized storage environment
US20100318549A1 (en) * 2009-06-16 2010-12-16 Florian Alexander Mayr Querying by Semantically Equivalent Concepts in an Electronic Data Record System
US20110041005A1 (en) * 2009-08-11 2011-02-17 Selinger Robert D Controller and Method for Providing Read Status and Spare Block Management Information in a Flash Memory System
US8892677B1 (en) * 2010-01-29 2014-11-18 Google Inc. Manipulating objects in hosted storage
US20110296440A1 (en) * 2010-05-28 2011-12-01 Security First Corp. Accelerator system for use with secure data storage
US20120096329A1 (en) * 2010-10-18 2012-04-19 Xyratex Technology Limited Method of, and apparatus for, detection and correction of silent data corruption
US20130019072A1 (en) * 2011-01-19 2013-01-17 Fusion-Io, Inc. Apparatus, system, and method for managing out-of-service conditions
US20140189060A1 (en) * 2013-01-03 2014-07-03 Futurewei Technologies, Inc. End-User Carried Location Hint for Content in Information-Centric Networks
US20150120991A1 (en) * 2013-10-31 2015-04-30 SK Hynix Inc. Data processing system and operating method thereof
US20150154200A1 (en) * 2013-12-02 2015-06-04 Qbase, LLC Design and implementation of clustered in-memory database
US20160337464A1 (en) * 2013-12-11 2016-11-17 Telefonaktiebolaget Lm Ericsson (Publ) Proxy interception
US20150288381A1 (en) * 2014-04-07 2015-10-08 International Business Machines Corporation Compression of floating-point data by identifying a previous loss of precision
US20160004455A1 (en) * 2014-07-04 2016-01-07 Fujitsu Limited Method of controlling splitting of data, and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Free RAID Recovery, "What is RAID?;" 2009, http://www.freeraidrecovery.com/library/what-is-raid.aspx *
Marks, Howard, "Understanding VMware VAAI Offloads and Unloads," March 26, 2014, https://www.networkcomputing.com/storage/understanding-vmware-vaai-offloads-and-unloads/1578770159 *
Seagate, "Serial ATA Native Command Queuing;" July 2003, https://www.seagate.com/docs/pdf/whitepaper/D2c_tech_paper_intc-stx_sata_ncq.pdf *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11210196B1 (en) 2016-05-17 2021-12-28 United Services Automobile Association (Usaa) Systems and methods for locally streaming applications in a computing system
US11954006B1 (en) * 2016-05-17 2024-04-09 United Services Automobile Association (Usaa) Systems and methods for locally streaming applications in a computing system
US11663108B1 (en) * 2016-05-17 2023-05-30 United Services Automobile Association (Usaa) Systems and methods for locally streaming applications in a computing system
US10089209B1 (en) * 2016-05-17 2018-10-02 United Services Automobile Association (Usaa) Systems and methods for locally streaming applications in a computing system
US10248531B1 (en) * 2016-05-17 2019-04-02 United Services Automobile Association (Usaa) Systems and methods for locally streaming applications in a computing system
US10452512B1 (en) * 2016-05-17 2019-10-22 United Services Automobile Association (Usaa) Systems and methods for locally streaming applications in a computing system
US10824534B1 (en) * 2016-05-17 2020-11-03 United Services Automobile Association (Usaa) Systems and methods for locally streaming applications in a computing system
US20170344598A1 (en) * 2016-05-27 2017-11-30 International Business Machines Corporation De-Duplication Optimized Platform for Object Grouping
CN106598889A (en) * 2016-08-18 2017-04-26 湖南省瞬渺通信技术有限公司 SATA (Serial Advanced Technology Attachment) master controller based on FPGA (Field Programmable Gate Array) sandwich plate
US20180278459A1 (en) * 2017-03-27 2018-09-27 Cisco Technology, Inc. Sharding Of Network Resources In A Network Policy Platform
US20210200458A1 (en) * 2017-07-27 2021-07-01 EMC IP Holding Company LLC Storing data in slices of different sizes within different storage tiers
US11755224B2 (en) * 2017-07-27 2023-09-12 EMC IP Holding Company LLC Storing data in slices of different sizes within different storage tiers
US11379246B2 (en) * 2019-07-24 2022-07-05 EMC IP Holding Company LLC Automatic configuration of multiple virtual storage processors
US11621798B2 (en) * 2021-02-02 2023-04-04 Cisco Technology, Inc. Signaling of preamble puncturing configuration in a non-high throughput RTS/CTS exchange
US20220247515A1 (en) * 2021-02-02 2022-08-04 Cisco Technology, Inc. Signaling of preamble puncturing configuration in a non-high throughput rts/cts exchange
US20230198669A1 (en) * 2021-02-02 2023-06-22 Cisco Technology, Inc. Signaling of preamble puncturing configuration in a non-high throughput rts/cts exchange
US11916670B2 (en) * 2021-02-02 2024-02-27 Cisco Technology, Inc. Signaling of preamble puncturing configuration in a non-high throughput RTS/CTS exchange
EP4239489A1 (en) * 2022-03-03 2023-09-06 Samsung Electronics Co., Ltd. Parallel processing in computational storage
US20230280936A1 (en) * 2022-03-03 2023-09-07 Samsung Electronics Co., Ltd. Parallel processing in computational storage

Also Published As

Publication number Publication date
KR20160081851A (en) 2016-07-08
US20230028569A1 (en) 2023-01-26
KR102581374B1 (en) 2023-09-21

Similar Documents

Publication Publication Date Title
US20230028569A1 (en) Computing system with distributed compute-enabled storage group and method of operation thereof
US11960777B2 (en) Utilizing multiple redundancy schemes within a unified storage element
US10789020B2 (en) Recovering data within a unified storage element
US11340794B2 (en) Multiprocessor system with independent direct access to bulk solid state memory resources
US10001947B1 (en) Systems, methods and devices for performing efficient patrol read operations in a storage system
US6782450B2 (en) File mode RAID subsystem
US10992533B1 (en) Policy based path management
US9417964B2 (en) Destaging cache data using a distributed freezer
US9547446B2 (en) Fine-grained control of data placement
US8538929B2 (en) Archiving de-duplicated data on tape storage media using graph partitions
US9619404B2 (en) Backup cache with immediate availability
US9547616B2 (en) High bandwidth symmetrical storage controller
US8447947B2 (en) Method and interface for allocating storage capacities to plural pools
US9298555B1 (en) Managing recovery of file systems
US11003554B2 (en) RAID schema for providing metadata protection in a data storage system
US9298394B2 (en) Data arrangement method and data management system for improving performance using a logical storage volume
US20190018593A1 (en) Efficient space allocation in gathered-write backend change volumes
US11526447B1 (en) Destaging multiple cache slots in a single back-end track in a RAID subsystem
US20160224250A1 (en) Data set management
US11157198B2 (en) Generating merge-friendly sequential IO patterns in shared logger page descriptor tiers
US9176681B1 (en) Managing provisioning of storage in storage systems
US20230229363A1 (en) Tiering Valid Data after a Disaster Recovery Operation
US10416924B1 (en) Identifying workload characteristics in dependence upon storage utilization
US11256428B2 (en) Scaling raid-based storage by redistributing splits

Legal Events

Date Code Title Description
AS Assignment

Owner name: SAMSUNG ELECTRONICS CO., LTD., KOREA, DEMOCRATIC P

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KANG, YANGWOOK;KI, YANG SEOK;PARK, DONGCHUL;REEL/FRAME:036249/0557

Effective date: 20150728

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STCV Information on status: appeal procedure

Free format text: APPEAL BRIEF (OR SUPPLEMENTAL BRIEF) ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: TC RETURN OF APPEAL

STCV Information on status: appeal procedure

Free format text: ON APPEAL -- AWAITING DECISION BY THE BOARD OF APPEALS

STCV Information on status: appeal procedure

Free format text: BOARD OF APPEALS DECISION RENDERED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION