US20140230342A1 - Building support with concealed electronic component for a structure - Google Patents

Building support with concealed electronic component for a structure

Info

Publication number
US20140230342A1
Authority
US
United States
Prior art keywords
data
storage
building support
substrate
building
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US14/242,905
Inventor
Christopher P. Macioch
Michael J. Pingitore
Frank C. Pingitore
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CFM Global LLC
Original Assignee
CFM Global LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CFM Global LLC filed Critical CFM Global LLC
Priority to US14/242,905
Publication of US20140230342A1
Legal status: Pending

Classifications

    • E FIXED CONSTRUCTIONS
    • E04 BUILDING
    • E04H BUILDINGS OR LIKE STRUCTURES FOR PARTICULAR PURPOSES; SWIMMING OR SPLASH BATHS OR POOLS; MASTS; FENCING; TENTS OR CANOPIES, IN GENERAL
    • E04H14/00 Buildings for combinations of different purposes not covered by any single one of main groups E04H1/00-E04H13/00 of this subclass, e.g. for double purpose; Buildings of the drive-in type
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05K PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K7/00 Constructional details common to different types of electric apparatus
    • H05K7/14 Mounting supporting structure in casing or on frame or rack
    • H05K7/1485 Servers; Data center rooms, e.g. 19-inch computer racks
    • H05K7/1497 Rooms for data centers; Shipping containers therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/3003 Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/3006 Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/3003 Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/3034 Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is a storage system, e.g. DASD based or network based
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/3055 Monitoring arrangements for monitoring the status of the computing system or of the computing system component, e.g. monitoring if the computing system is on, off, available, not available
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3452 Performance evaluation by statistical analysis
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/02 Details
    • H04L12/14 Charging, metering or billing arrangements for data wireline or wireless communications
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0876 Network utilisation, e.g. volume of load or congestion level
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/10 Active monitoring, e.g. heartbeat, ping or trace-route
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M15/00 Arrangements for metering, time-control or time indication; Metering, charging or billing arrangements for voice wireline or wireless communications, e.g. VoIP
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/24 Accounting or billing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]

Definitions

  • Embodiments of the present invention generally relate to a concealed building support, and, in particular, to an apparatus, system and method for providing a concealed building support that is adapted to house or support an electronic component.
  • Digital data has traditionally been stored locally in a storage asset associated with a computer that generated the data, e.g., on the computer's hard drive, or stored onto magnetic, optical, and/or removable storage media such as CDs, DVDs, removable flash drives, floppy disks, and so forth.
  • A single storage asset, or a small number of clustered storage assets, may consume a relatively modest amount of energy, generate a modest amount of heat, and attract little attention or scrutiny from outsiders.
  • The digital data has traditionally been stored nearby, such as in memory accessible to a server on a local area network (“LAN”).
  • Cloud-based storage shifts the burden of maintaining data storage assets to a central manager, e.g., a conventional data warehouse and warehouse operator.
  • Cloud-based storage typically requires a relatively large number of storage assets. Economies of scale may be achieved for some aspects of operation, such as having dedicated technical support available to tend to hardware failures, enforce security or backup policies, and so forth.
  • A concentration of storage assets in a data warehouse may create problems that exceed a tolerable level unless mitigated.
  • An N-fold increase in the number of storage assets may by itself cause an N-fold increase in power consumption and heat generation, which in turn requires higher-capacity climate control equipment and concomitant further increases in energy consumption for cooling.
  • Modern data warehouses have become massive facilities that consume large amounts of power, occupy large plots of land, and require high-capacity communication trunks to support the data traffic.
  • The conventional data warehouse draws attention to itself due to its physical size, the value of the data stored within it, and the threat of business disruption if the data warehouse were to be attacked or otherwise suffer a failure.
  • Conventional wisdom teaches away from a disfavored maxim known as “security through obscurity,” which holds that sufficient security of an asset may be achieved by attempting to hide the asset without the need for overwhelming security protection.
  • The data warehouse requires increased physical security in order to guard against criminals, terrorists and similar threats.
  • The required infrastructure of a conventional data warehouse, i.e., to supply the electrical energy, cooling capacity, communication network capacity, and physical security, increases the cost and eco-footprint of operating a data warehouse and may not be appealing to eco-conscious consumers or to consumers who seek a lower cost to store data in the Cloud.
  • Embodiments in accordance with the present invention avoid the drawbacks of the known art by providing a dispersed, distributed file system in order to host Cloud-based storage.
  • Storage nodes, which may be individual hard drives or clusters of co-located hard drives, may be dispersed and located within buildings that are not ordinarily used for data warehouses, such as ordinary homes, office buildings, retail locations, and so forth.
  • Storage nodes may be located within unobtrusive but otherwise unused space of the buildings, such as (in the case of an ordinary home) structural space and/or open interior space that is enclosed by the structure.
  • Open interior space may include attic space, basement space, and so forth.
  • Structural space may include space within the structure itself, such as space within walls, space under floorboards, and so forth. Structural space is often closed off with limited physical access compared to open interior space.
  • The dispersed, redundant, fault-tolerant and unobtrusive nature of the storage nodes reduces the need for electrical power, environmental control, communication speeds, and elaborate security measures.
  • Building owners are encouraged to host storage nodes in their properties and participate in ongoing operation of a storage network, by receiving fees or other monetary incentives (e.g., royalty checks, discount coupons from local merchants, etc.), or non-monetary incentives (e.g., preferred memberships in a business such as a local gym, charitable donations in their name, etc.).
  • Embodiments in accordance with the present invention may be marketed to data farmers and Primary Source Data Clients as a “green” (i.e., eco-friendly) product.
  • Compared to conventional data storage centers, embodiments use substantially less electricity.
  • Conventional data storage centers typically use hard drive storage, which relies on spinning motorized drives that are constantly powered.
  • In contrast, embodiments may utilize solid-state technologies, reducing the power consumption and heat production required for storage. With the use of more efficient technology and the dispersal of individual storage assets, large data centers will not be required. The elimination of these centers would in turn eliminate the need for large heating, ventilation and air conditioning (“HVAC”) equipment and their resultant large power demands.
  • Embodiments in accordance with the present invention may provide a system and method for distributed file storage, the system including: a plurality of data farms, each data farm including: a data storage module; a local control module comprising a data protection module; and a communication interface between said data farm and a wide-area network; an interface to one or more remote data applications; and an administrative module configured to record a quantity of data received or transmitted by the communication interface of the data farm.
  • The method may include: providing a plurality of data farms; accepting data from a remote data application; selecting a data farm from among the plurality of data farms in which to store the data; and storing the data in the selected data farm.
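  • As a rough illustration of the system and method just described, the following is a minimal Python sketch, not the patented implementation: a data-farm record, an administrative module that records the quantity of data transferred per farm, and a selection step that stores data on the least-utilized farm. All class and function names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class DataFarm:
    farm_id: str
    capacity_bytes: int
    used_bytes: int = 0

@dataclass
class AdministrativeModule:
    usage: dict = field(default_factory=dict)  # farm_id -> bytes transferred

    def record_transfer(self, farm_id: str, n_bytes: int) -> None:
        # record a quantity of data received or transmitted by a farm
        self.usage[farm_id] = self.usage.get(farm_id, 0) + n_bytes

def store(farms: list, admin: AdministrativeModule, data: bytes) -> str:
    """Select a data farm, store the data there, and record the usage."""
    farm = min(farms, key=lambda f: f.used_bytes / f.capacity_bytes)
    farm.used_bytes += len(data)               # stand-in for the actual write
    admin.record_transfer(farm.farm_id, len(data))
    return farm.farm_id
```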
  • Embodiments in accordance with the present invention may provide a building support with a concealed electronic component for a structure, including: a rigid support member; a mounting attachment affixed to the rigid support member, the mounting attachment adapted to support an electronic component; and a transceiver coupled to the electronic component, the transceiver adapted to support an external communication link.
  • Embodiments in accordance with the present invention may provide a backing material to support an electronic component concealed within a building structural element, wherein the building structural element comprises one or more rigid building support members, the backing material including: a substrate; a structure attachment along at least one surface of the substrate, the attachment adapted to attach the substrate to a rigid building support member; and one or more electronic component attachments disposed on a major surface of the rigid substrate.
  • FIG. 1 is a block diagram depicting a distributed file system in accordance with an embodiment of the present invention.
  • FIG. 2 illustrates at a high level of abstraction a server of a distributed file system, in accordance with an embodiment of the present invention.
  • FIG. 3 illustrates an exemplary structural storage location, in accordance with an embodiment of the present invention.
  • FIG. 4 illustrates a housing for a storage apparatus, in accordance with an embodiment of the present invention.
  • FIG. 5 illustrates a cross sectional view of a plurality of housings deployed at a data farm, in accordance with an embodiment of the present invention.
  • The disclosure will be illustrated below in conjunction with an exemplary communication system. Although well suited for use with, e.g., a system using one or more servers and/or databases, the disclosure is not limited to use with any particular type of communication system or configuration of system elements. Those skilled in the art will recognize that the disclosed techniques may be used in any communication application in which it is desirable to utilize a low-cost and low-overhead distributed file system.
  • The term “module” refers generally to a logical sequence or association of steps, processes or components.
  • A software module may comprise a set of associated routines or subroutines within a computer program.
  • A module may comprise a substantially self-contained hardware device.
  • A module may also comprise a logical set of processes irrespective of any software or hardware implementation.
  • Non-volatile media includes, for example, NVRAM, or magnetic or optical disks.
  • Volatile media includes dynamic memory, such as main memory.
  • Computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, magneto-optical medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, solid state medium like a memory card, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read.
  • Computer-readable media may also include volatile or non-volatile emerging storage media such as data encoded in chemical or organic-chemical cells, and holographic cells.
  • A digital file attachment to e-mail or other self-contained information archive or set of archives is considered a distribution medium equivalent to a tangible storage medium.
  • When the computer-readable media is configured as a database, the database may be any type of database, such as relational, hierarchical, object-oriented, and/or the like. Accordingly, the disclosure is considered to include a tangible storage medium or distribution medium and prior art-recognized equivalents and successor media, in which the software implementations of the present disclosure are stored.
  • Embodiments in accordance with the present invention provide a system for data storage and backup that will utilize existing underutilized or “wasted” spaces, voids, etc. inside industrial, commercial and residential buildings in order to generate a new source of data storage space and create a symbiotic data storage relationship between big corporations, small businesses, homeowners, and data servicers.
  • The data farm hosts may receive income for storing the data within their structures. They may provide a dedicated high-speed Internet connection for the storage system, thus allowing fast access and download of backup information as well as retrieval of documentation at any time.
  • Embodiments in accordance with the present invention will provide to data generators and data users (e.g., corporate data users) an alternate physical location to store their digital data and backups. These new digital storage locales will offer their users multiple backups around the world, further protecting corporations' backups from natural disasters or attacks, both physical and cyber.
  • The data storage devices will be installed in “empty” spaces or “voids” in industrial, residential and commercial structures. Such hidden, discreet or unobtrusive locations may include, but are not limited to, cavities inside wall space, attic space, heating, ventilation and air conditioning (“HVAC”) ducts, conduit, etc.
  • The data storage devices may include solid-state storage units within a protective enclosure, which are then installed in discreet locations. If a data storage device is installed within an exterior wall or other boundary with an area that is not temperature-controlled, the device sheathing or housing will tend to reduce the resulting temperature variations.
  • The storage devices may also be incorporated into a number of construction materials to utilize the mass of the structure for data storage.
  • Voids may be pre-formed into concrete slabs, planks, studs, and so forth, i.e., the devices may be placed substantially any place that a void exists (either natural or planned).
  • Existing structures could be retrofitted and new construction could use the building materials prefabricated with data storage devices.
  • The data storage units may be interconnected and gridded for optimal flow of data and storage throughout the structure while consuming less energy than traditional storage facilities.
  • The data sent to and stored on these data storage devices would be managed and maintained by a third-party data servicing company.
  • The third-party providers would coordinate backups between corporations seeking backup security and the “housers” of the storage units (i.e., the data farmers). Charges may be based on the size of the system and the frequency with which information is retrieved for restore purposes. Charges may also be based on how much data is transferred to or from the “housers” on a daily basis.
  • Data security is an important consideration related to the transfer and storage of the data.
  • The data may be encrypted by a third-party provider so that, if an unauthorized entity attempts to access data stored in a data farm, the unauthorized entity would not be able to decipher the data.
  • Access by proxy may be allowed, wherein a user or process may access the data in a data farm on behalf of an end user, decrypt or otherwise process the retrieved data, then send the processed data to the end user.
  • Improved security is provided at data farms by limiting knowledge of their existence and operation to persons having a need to know. A data farmer will not know whose information is being routed through their storage devices, nor will the end client know precisely at which locations or on which data farms their information is stored. This provides the invention's security through obscurity.
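  • A minimal sketch of the encrypt-before-store idea above, using the widely available Python `cryptography` package; the key is held only by the data servicer, so a data farmer storing the ciphertext cannot decipher it. Variable names are illustrative assumptions.

```python
from cryptography.fernet import Fernet

servicer_key = Fernet.generate_key()     # held by the data servicer, never by farmers
cipher = Fernet(servicer_key)

plaintext = b"client backup payload"
ciphertext = cipher.encrypt(plaintext)   # only this ciphertext lands on a data farm

# Access by proxy: the servicer retrieves and decrypts on the end user's behalf.
assert cipher.decrypt(ciphertext) == plaintext
```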
  • FIG. 1 illustrates at a high level of abstraction a system 100 in accordance with an embodiment of the invention.
  • System 100 includes a plurality of computing nodes 152-1 . . . 152-M hosting one or more user applications 102-1 . . . 102-M, a plurality of data farms 104-1 . . . 104-N, and a server 106, interconnected as shown through a wide area network (“WAN”) 101 such as the Internet.
  • An individual but unspecific user application may be referred to herein as user application 102-m or as user application 102.
  • An individual but unspecific data farm may be referred to herein as data farm 104-n or as data farm 104.
  • Computing node 152 may include a server coupled to a memory and associated internal and/or external communication interfaces in order to support user application 102.
  • Server 106 may be a software-controlled system including a processing unit (CPU), microprocessor, or other type of digital data processor executing software or an Application-Specific Integrated Circuit (ASIC) as well as various portions or combinations of such elements.
  • Server 106 may further include a storage network module 110 and/or an administrative module 112 .
  • FIG. 2 depicts a distributed file system (“DFS”) 200 according to an embodiment of the present disclosure, with an emphasis on depicting exemplary components of server 106 at a lower level of abstraction.
  • DFS 200 may include a server 106 that is in communication, via a (typically untrusted or unsecure or public) WAN 101 , with one or more external computing nodes 152 .
  • The external computing nodes 152 are not under the direct control of the enterprise administering the server 106 and/or have a decreased level of trust with the server 106 as compared with communication devices 236-1 . . . 236-K that are within the server 106.
  • One or more of the communication devices 236-1 . . . 236-K may include a local terminal or similar interface to provide direct, local control of server 106.
  • Exemplary types of external computing nodes 152 include, without limitation, laptops, Personal Computers (PCs), Personal Digital Assistants (PDAs), gateways to other LANs or WANs, and the like.
  • The server 106 may include a boundary device 216 including a server table 220, one or more internal communication devices 236-1 . . . 236-K, one or more application servers 244 which may be capable of providing one application 248 or a set of different applications 252, a number of other servers 256 to provide other functions of server 106, and an enterprise database 260, all of which are interconnected by a (trusted or secure or private) Local Area Network (LAN) 264.
  • LAN Local Area Network
  • Some or all of the functions depicted in FIG. 2 may be co-hosted and/or co-resident on a single server.
  • The depiction of components in FIG. 2 is generally intended to be a logical depiction of the components of the system 200.
  • The LAN 264 can be secured from intrusion by untrusted parties by a gateway and/or firewall located between the LAN 264 and WAN 101.
  • The boundary device 216 may include the functionality of the gateway and/or firewall.
  • A separate gateway or firewall may be provided between the boundary device 216 and WAN 101.
  • Network boundary device 216 is responsible for initially routing communications within the server 106 for servicing a particular user involved in accessing the DFS.
  • Communications server 244 with enterprise database 260 may perform the functions of storage network module 110 .
  • Although only two application servers 244 are depicted, one skilled in the art will appreciate that one, two, three, or more application servers 244 can be provided and each server may be configured to provide one or more applications.
  • The applications provided by a particular application server 244 may vary depending upon the capabilities of the server 244, and in the event that a particular application server 244 comprises a set of applications 252, one, some, or all of the applications in that set of applications 252 may be included in a particular application sequence.
  • Application server 244 may be used to perform the functions of administration module 112 .
  • Each data farm 104-n may include a storage apparatus 126 and a local control module 124.
  • Storage apparatus 126 may include substantially any type of computer-readable medium.
  • Local control module 124 provides a communication interface between WAN 101 and storage apparatus 126 .
  • Local control module 124 may further provide firewall, gateway, routing functions, administrative and local processing control of its associated data farm 104 .
  • Local control module 124 acts as a server for its associated data farm 104 .
  • At least some of data farms 104-n may differ from one another based upon factors such as the type of storage technology used for storage apparatus 126, its associated latency, and the speed and/or latency of its associated communication link to WAN 101.
  • Similarly, at least some of computing nodes 152-m may differ from one another at least in computing throughput and in the speed and/or latency of their associated communication links to WAN 101. Therefore, system 100 may be able to accommodate a heterogeneous and geographically diverse network environment, unlike some systems of the known art in which each data farm 104-n may represent, e.g., a rack of storage units in a conventional data farm.
  • System 100 may be useful for storage applications in which relatively higher latencies may be tolerable, such as a storage application that is used to infrequently access data stored for archival backup purposes.
  • A portion of memory associated with computing node 152 may be usable as part of a data farm 104 for a different computing node 152.
  • The plurality of data farms 104 together provide a distributed file system used by one or more of user applications 102.
  • User applications 102 write data to the DFS and/or read data from the DFS, and are thus data users.
  • The DFS optionally operates under the control of server 106, and in particular under the control of storage network module 110.
  • The DFS is designed to store very large data sets reliably, and to stream those data sets to user applications 102.
  • A large DFS may include thousands of data farms 104. By distributing storage and local processing control across many servers, the DFS may grow with demand while remaining economical at substantially every size.
  • Hadoop provides a framework for an analysis and transformation of very large data sets using a MapReduce paradigm.
  • Data and computation may be partitioned across thousands of data farms 104.
  • A Hadoop-based DFS may scale storage capacity and I/O bandwidth by simply adding additional data farms 104.
  • Server 106 may implement Hadoop NameNode functions, and data farms 104 may implement DataNode functions. Under Hadoop, the NameNode is important to the operation of the DFS. The NameNode keeps a directory tree of all files in the file system, and tracks where across the cluster the file data is kept. The NameNode does not store the data of these files itself.
  • User applications 102 communicate with the NameNode whenever the user application 102 attempts to locate a file in the DFS, or when the user application 102 attempts to add/copy/move/delete a file in the DFS.
  • The NameNode responds to successful requests by returning a list of relevant DataNode servers where the data is stored.
  • The NameNode should be a highly reliable computing element, since if the NameNode fails then the DFS will go offline.
  • An optional secondary NameNode may be used to provide protection if a primary NameNode fails.
  • The NameNode should be hosted on a server having a large amount of RAM space.
  • The DFS stores data in DataNodes.
  • The DFS has numerous DataNodes, and data may be replicated across more than one DataNode.
  • A DataNode connects to the NameNode and is then ready to support DFS operations.
  • User applications 102 may communicate directly to a DataNode after the NameNode has provided the location of the data.
  • DataNodes may communicate with each other, such as if they are replicating data. There is usually no need to use RAID storage for DataNode data, because data is designed to be replicated across multiple data farms 104 , rather than multiple disks on the same data farm 104 .
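  • As a toy Python sketch of the NameNode/DataNode split described above (an illustration of the concept, not Hadoop's actual code): the NameNode holds only the mapping from file paths to the DataNodes storing the data, and answers lookups with that list so clients can contact the DataNodes directly.

```python
import random

class NameNode:
    def __init__(self, replication: int = 3):
        self.replication = replication
        self.block_map = {}                    # file path -> list of DataNode ids

    def add_file(self, path: str, datanodes: list) -> list:
        # choose distinct DataNodes to hold replicated copies of the file
        chosen = random.sample(datanodes, min(self.replication, len(datanodes)))
        self.block_map[path] = chosen
        return chosen

    def locate(self, path: str) -> list:
        # the NameNode never returns file data, only where it is stored
        return self.block_map.get(path, [])

nn = NameNode()
farms = [f"datafarm-{i}" for i in range(10)]
nn.add_file("/backups/2014/q1.tar", farms)
print(nn.locate("/backups/2014/q1.tar"))   # e.g. ['datafarm-7', 'datafarm-2', 'datafarm-5']
```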
  • The distributed file system may be based on other protocols known in the art, such as BitTorrent, PVFS, or Lustre.
  • Lustre is known as a storage architecture for data clusters.
  • The central component is the Lustre file system, a shared file system for clusters.
  • The Lustre file system is currently available for Linux and provides a POSIX-compliant UNIX file system interface.
  • Storage network module 110 may not be needed if a peer-to-peer protocol such as BitTorrent is used.
  • Each farm of data farms 104 may have characteristics such as capacity, underlying storage technology, storage latency, communication latency, local controller capabilities, etc., that are independent of similar characteristics at other farms of data farms 104 . These characteristics may vary significantly from one data farm 104 to another. In this way, the DFS is operable with a heterogeneous collection of data farms.
  • Each of data farms 104 may be located in widely dispersed locations, such as in discreet and unobtrusive locations in structures located substantially anywhere nationwide or worldwide, so long as they are reachable through WAN 101.
  • A structure such as a house typically has hidden spaces that are physically large enough to hold a computer-readable medium.
  • Such a computer-readable medium could be installed during construction or retrofitted later, for use as a data farm 104.
  • FIG. 3 illustrates an internal view of a wall 300 without attached drywall.
  • The wall typically includes a horizontal top plate, a horizontal bottom plate, and a plurality of vertical wall studs.
  • The wall usually includes wiring as part of the electrical system, and sometimes also includes separate wiring for an Ethernet network. Access to the electrical system external to the wall may be provided by an electric plug socket.
  • Other discreet and unobtrusive locations within a typical house include within or between floor joists, basement areas, attic areas, under a stairway, hollow core doors, etc.
  • Other discreet and unobtrusive locations may be available, such as under a raised floorboard, above a false ceiling, within modular walls, and so forth.
  • A low-latency storage apparatus 126 may include a portable disk drive or a solid state drive, which are commonly available in sizes such as approximately 3″ × 4″ × 1″.
  • A high-latency storage apparatus 126 may include a USB flash drive, which is commonly available in sizes such as approximately 2″ × 0.75″ × 0.4″.
  • A tablet-based local controller 124 may be approximately 7″ × 4″ × 1″.
  • Data farm 104 may be placed outside, such as on a rooftop, atop a telephone pole, or incorporated into renewable energy systems (e.g., solar panels, wind turbines, etc.). Such locations may also rely upon renewable power (e.g., solar panels, wind turbines, etc.) with battery backup for electrical power, and WiFi signals (either public or as provided by a data farmer) for a communication link. Space in other structures such as garages and sheds may also be used. Structural materials (e.g., studs) may be pre-configured to include storage modules.
  • Such pre-configured structural materials may be well suited to retrofitting a structure to include embodiments in accordance with the present invention, with minimal impact to other aspects of usage of the structure.
  • Outdoor locations and other uncontrolled environments may require local controllers 124 and storage apparatus 126 that are ruggedized for the expected temperature swings and protected against wind, rain, and other elements.
  • A higher-latency storage apparatus 126 may include one or more USB flash drives. Flash drives are commonly available in capacities ranging from 2 GB to 64 GB and higher.
  • A controller such as an Android tablet or similar compact computing device may include a USB port. The USB port may be further fanned out by use of one or more USB hubs. Flash drives may then be connected to one or more of the USB ports. Usage of flash drives for the storage apparatus has an added benefit of typically consuming less power than usage of a portable disk drive.
  • Individual storage units having appropriate USB ports may be daisy-chained together in order to provide a composite storage apparatus 126 having more data storage capacity than an individual storage unit.
  • A data farm 104 may include an expandable data bus. Individual storage units may be added to data farm 104 by connecting the additional storage unit to the data bus. Expandability of an individual data farm 104 may also be achieved by usage of wireless communication methods, which may be inherently expandable. Wireless methods may include WiFi (IEEE 802.11) and short-range communication such as Bluetooth (IEEE 802.15.1), Zigbee (IEEE 802.15.4), etc.
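  • The following minimal Python sketch illustrates one way a composite storage apparatus could present several small flash drives as one larger store, by hashing each object key to a drive; the mount paths are illustrative assumptions.

```python
import hashlib
from pathlib import Path

MOUNTS = [Path("/media/usb0"), Path("/media/usb1"), Path("/media/usb2")]  # assumed mounts

def drive_for(key: str) -> Path:
    """Deterministically map an object key to one of the flash drives."""
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return MOUNTS[digest % len(MOUNTS)]

def put(key: str, data: bytes) -> None:
    target = drive_for(key) / key
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(data)

def get(key: str) -> bytes:
    return (drive_for(key) / key).read_bytes()
```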
  • All storage apparatus 126 and associated local control units 124 at data farm 104 should be substantially concealed. A person should not be aware that they are within a data farm 104.
  • Data farm 104 may be used as primary data storage or as backup data storage for remote clients. Higher latency storage devices may be more useful for backup storage applications.
  • A storage enclosure 302 may be located within the wall 300.
  • Storage enclosure 302 includes the components of data farm 104.
  • Physical characteristics of storage enclosure 302 may be adapted to the specific installation location. For example, the size, shape, capacity, etc. of storage enclosure 302 , and the mounting or support it provides to components of data farm 104 , may be adapted to the available space.
  • Storage enclosure 302 may not be fully enclosed; e.g., its top may be left open for ease of maintenance, because for mounting locations within a wall there is ordinarily little risk of unwanted physical intrusion from the top.
  • Enclosure requirements may vary from one installation site to another, based on factors such as the installation location and the method of securing the enclosure and/or data farm.
  • Data farms 104 having sufficiently low electrical power consumption may be powered by wireless methods and systems such as inductive power coupling.
  • An inductive power system includes a transmitter coil and a receiver coil. Both coils form a system of magnetically coupled inductors. An alternating current in the transmitter coil generates a magnetic field which induces a voltage in the receiver coil. This voltage can be used to power a sufficiently low-power data farm 104 .
  • The transmitting coil may be located on the room-facing side of a drywall wall, and the receiving coil on the interior-facing side of the drywall wall.
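  • For reference, the induced voltage in the receiver coil follows the standard textbook relations for two magnetically coupled coils (general physics, not figures from the patent):

```latex
v_2(t) = M \, \frac{di_1(t)}{dt}, \qquad M = k \sqrt{L_1 L_2}, \quad 0 \le k \le 1
```

  where i_1 is the alternating transmitter-coil current, L_1 and L_2 are the coil self-inductances, M is the mutual inductance, and k is the coupling coefficient (reduced by the drywall gap between the coils).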
  • In some embodiments, one component of a data farm (e.g., a controller or hub) may supply electrical power to other components (e.g., USB devices) through a USB link or other communication link.
  • Data farm 104 should include a communication interface 128 to WAN 101.
  • The communication link may include one or more embodiments such as: an Ethernet interface to a broadband access point (e.g., a hub and/or router); a wireless interface (e.g., WiFi) to a host-supplied broadband access point (e.g., a wireless router); a WiFi interface to a public WiFi hotspot; a 4G wireless cell phone interface to a cell phone communication tower; a USB link; a fiber optic link; a wireless (free-space) optic link such as a laser; and so forth. Power may be conserved by placing the data farm 104 in a receive-only or listen-only mode, until the data farm 104 needs to respond (e.g., providing data, responding to a ping, etc.).
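  • A minimal Python sketch of the receive-only power-saving behavior described above: the farm's controller blocks in a listen loop and transmits only when it must respond, e.g., to a ping or a data request. The port number and message formats are illustrative assumptions.

```python
import socket

LISTEN_PORT = 50514                      # assumed, for illustration only

def lookup(key: str) -> bytes:
    return b""                           # placeholder for a real storage read

def listen_loop() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", LISTEN_PORT))
    while True:
        message, sender = sock.recvfrom(4096)  # blocks; transmitter otherwise idle
        if message == b"PING":
            sock.sendto(b"PONG", sender)       # transmit only when asked
        elif message.startswith(b"GET "):
            key = message[4:].decode()
            sock.sendto(lookup(key), sender)   # look up in storage apparatus 126
```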
  • FIG. 4 illustrates a housing 400 for a storage apparatus 126 in accordance with an embodiment of the present invention.
  • Although housing 400 is illustrated as a parallelepiped with six surfaces (including two major surfaces and four edge surfaces), the shape of housing 400 is not so constrained. Alternative shapes having other numbers of surfaces may be used, such as more surfaces (e.g., a geodesic shape, solid hexagon, etc.) or fewer surfaces (e.g., sphere, cone, pyramid, etc.).
  • Housing 400 need not fully enclose a space.
  • Housing 400 may include one open side.
  • Housing 400 may include only one surface, configured to mount the storage apparatus on one side of the mounting surface, with the mounting surface attached on its other side to a rigid base.
  • Housing 400 may include one or more transceivers 402 configured to communicate over at least a 90-degree angle around housing 400.
  • Multiple transceivers 402 may be used to provide 360-degree communication in a plane surrounding housing 400.
  • Multiple transceivers 402 may also be used to provide spherical or hemispherical coverage around housing 400.
  • FIG. 4 illustrates transceivers 402 located on two of the three visible planar surfaces of housing 400 .
  • Transceivers 402 may be located at substantially any exterior points of housing 400, including a surface, an edge, and/or a corner.
  • Transceivers 402 may be mounted internally to housing 400, either behind a surface of housing 400 that is transparent to the wireless signal (e.g., a glass window for optical signals), or configured to transmit through an open side of housing 400.
  • Transceivers 402 may be used to communicate wirelessly with storage apparatus 126 within housing 400. Communication may include: sending and/or receiving data to be stored in, or retrieved from, storage apparatus 126; controlling storage apparatus 126; and/or transmitting status of storage apparatus 126. Transceivers 402 may use substantially any wireless communication technology, such as RF (e.g., WiFi, ZigBee, Bluetooth, etc.), infrared, optical (e.g., LED or laser), ultrasound, etc.
  • Within housing 400 there may be included a securing apparatus to securely attach storage apparatus 126 to housing 400.
  • Securing apparatus may include screws, bolts, adhesive, tie-straps, and so forth.
  • Housing 400 may further include a controller configured to read/write data to/from the storage apparatus 126 .
  • The controller may be further configured to provide a communication interface via transceivers 402.
  • The controller may be further configured to provide a status or state of health via transceivers 402, either periodically or in response to a query.
  • FIG. 5 illustrates a cross-sectional view 500 of a plurality of housings 400 installed within a wall, in accordance with an embodiment of the present invention.
  • View 500 illustrates three wall studs 502 , but persons of skill in the art will understand how to extend view 500 to more than three studs.
  • An individual housing 400 may be in communicative contact with at least one neighboring housing 400 .
  • A backing material 504 may be provided, upon which at least some of the plurality of housings 400 may be mounted.
  • Backing material 504 may be a mesh, webbing, solid board, combination thereof, and so forth, that is sufficiently strong in order to maintain the positions of housings 400 relative to one another, so that communications with a neighboring housing 400 via transceivers 402 may be supported.
  • Backing material 504 allows a plurality of housings 400 to be installed in advance on backing material 504 (e.g., at a factory); the backing material populated with housings 400 may then be attached to studs 502.
  • Backing material 504 may be large enough to be secured on opposite sides to studs 502 that are separated by a standard distance as known in the building construction arts.
  • Aperture 508 may allow for housings 400 on opposite sides of stud 502 to communicate with one another.
  • For example, housing 510 and housing 512, on opposite sides of the center stud 502 of FIG. 5, may be in communicative contact with each other.
  • The communicative contact may be by wired or wireless methods.
  • A backing material 506 may be provided that is attached on at least one side 516 to a secure object such as stud 502, and that has at least one other side 518 configured to be expandable.
  • Although side 518 is illustrated opposite side 516, side 518 may be positioned substantially anywhere along a perimeter or major surface of backing 506, or a portion thereof, so long as the attachment of side 516 to a secure object is not impaired.
  • Side 518 may include a locking apparatus 514 (e.g., interlocking protrusions) that is configured to interlock with matching voids of an expansion board (not shown in FIG. 5), thereby providing expandability.
  • The expansion board may be substantially similar to backing 506.
  • The plurality of housings 400 may be in communicative contact with a controller 520, which in turn is communicatively connected to WAN 101.
  • The plurality of housings 400 may communicate by use of protocols known in the art of data networking, such as a flood protocol, as sketched below.
  • Housings 400 may be able to report on a state of health or state of failure of storage apparatus 126 within the respective housing 400, along with an identifier of the housing 400 reporting the health or failure.
  • A failure map may be generated and made available to maintenance personnel.
  • The failure map may provide a graphical depiction of a specific housing 400 that is reporting a failure (or lack of reporting of good health).
  • The failure map may facilitate repairs by identifying failed storage apparatus 126 for quick replacement.
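  • A minimal Python sketch of housings flooding health reports to their neighbors, as described above: each housing rebroadcasts a report it has not yet seen, so a failure report eventually reaches every reachable housing (and thus controller 520). Names and message fields are illustrative assumptions.

```python
class Housing:
    def __init__(self, housing_id: str):
        self.housing_id = housing_id
        self.neighbors = []        # adjacent housings within transceiver range
        self.seen = set()          # (origin, sequence-number) pairs already handled
        self.delivered = []        # reports that reached this housing

    def flood(self, report: dict) -> None:
        key = (report["origin"], report["seq"])
        if key in self.seen:
            return                 # drop duplicates so flooding terminates
        self.seen.add(key)
        self.delivered.append(report)
        for neighbor in self.neighbors:
            neighbor.flood(report) # rebroadcast to all neighbors

# Usage: a chain a-b-c; a failure report from `a` reaches `c` via `b`.
a, b, c = Housing("a"), Housing("b"), Housing("c")
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
a.flood({"origin": "a", "seq": 1, "status": "storage failure"})
assert any(r["status"] == "storage failure" for r in c.delivered)
```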
  • Each dispersed data farm 104 should include a data protection module such as a firewall, anti-virus processes, and so forth. Data protection modules may be implemented in local controller 124 and/or communication interface 128.
  • Each data farm 104 includes a trusted environment behind its respective data protection module but, as among separate and different data farms 104 , the separate data farms 104 are in an untrusted data relationship.
  • The plurality of housings 400 may be mounted on a visible surface, rather than inside a wall.
  • For example, a plurality of housings 400 may be attached to a visible surface of a wall in order to provide an artistic display.
  • The plurality of housings 400 may be secured to the wall by conventional apparatus such as screws, bolts, clamps, welds, adhesive, Velcro, and so forth.
  • Storage network module 110 may attend to control aspects of operating the DFS, and administrative module 112 may attend to billing and credit aspects of operating the DFS.
  • Individual local controllers 124-n in associated data farms 104-n may attend to control aspects of operating the DFS, and to monitoring of data bandwidth usage through the associated node 152-m or data farm 104-n for billing and administrative purposes. Usage data so collected may be reported to administrative module 112, which may then attend to billing and credit (i.e., compensation) aspects of operating the DFS.
  • In a DFS without a central administrative node (e.g., a DFS based on BitTorrent or similar), individual local controllers 124-n in associated data farms 104-n, in cooperation with controllers and storage in computing nodes 152-m, may attend to control aspects of operating the DFS, and to monitoring of data bandwidth usage through the associated node 152-m or data farm 104-n for billing and administrative purposes. Usage data so collected may be reported to administrative module 112, which may then attend to billing and credit (i.e., compensation) aspects of operating the DFS.
  • Communication service and/or electrical power to data farm 104 may be disrupted at certain times. For example, downed trees may cause disruption to electrical or Internet connectivity, or a homeowner may perform renovation work that unwittingly affects the data farm hardware, or a homeowner may change broadband communication service providers, or may decide to let such service lapse (such as if moving), or the service may be disconnected, and so forth. Many such scenarios are possible. Consequently, system operation of the DFS should be resilient to failure or disconnection of individual data farms 104 from the network. Techniques to ensure resiliency are known in the art and include data replication, data striping, RAID storage, error correction codes, etc.
  • System resiliency may be achieved by replicating multiple copies of data throughout the DFS, such that each data item is stored on two or more data farms 104.
  • A system controller such as storage network module 110 may monitor a state of health of one or more data farms. Monitoring may be useful to determine utilization of the data farm, whether the data farm is online or offline, error conditions, and so forth. Monitoring a data farm may include periodically pinging each data farm 104 to determine if it is still functional. Alternatively, each data farm 104 may be configured to provide a periodic heartbeat signal to the system controller. The heartbeat may include status information. The system controller may keep track of whether each data farm 104 is online or offline.
  • If a data farm 104 goes offline, the system controller may then replicate the data items that had been stored on the offline data farm 104, by contacting the functioning data farms 104 that are storing copies of the data items. Copies of the data items may then be stored on one or more additional data farms 104, either by first sending the data items to the system controller for redistribution, or by sending the data items directly to the selected data farms 104.
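  • The monitoring and re-replication behavior described in the preceding bullets might look like the following minimal Python sketch; the heartbeat timeout and target replica count are illustrative assumptions, and the actual data transfer is omitted.

```python
import time

HEARTBEAT_TIMEOUT_S = 300                 # assumed: 5 minutes without a heartbeat
last_heartbeat = {}                       # farm_id -> time of last heartbeat
replicas = {}                             # data item id -> set of farm_ids holding it

def on_heartbeat(farm_id: str) -> None:
    last_heartbeat[farm_id] = time.time()

def sweep(all_farms: set, target_copies: int = 2) -> None:
    """Mark silent farms offline and restore the replica count elsewhere."""
    now = time.time()
    offline = {f for f in all_farms
               if now - last_heartbeat.get(f, 0.0) > HEARTBEAT_TIMEOUT_S}
    for item, holders in replicas.items():
        live = holders - offline
        candidates = list(all_farms - offline - live)
        while len(live) < target_copies and candidates:
            target = candidates.pop()
            # copy `item` to `target` from any farm in `live` (transfer omitted)
            live.add(target)
        replicas[item] = live
```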
  • Embodiments in accordance with the present invention include a method of operating a DFS such that property owners are compensated for housing a data farm 104, and users of computing nodes 152-m obtain the benefit of a secure and resilient DFS at a lower overall cost than from a traditional highly secure data farm.
  • A method of operating the DFS may involve actions by entities such as: a user of a computing node 152-m (referred to herein as a “Primary Source Data Client”); a property owner associated with data farm 104-n (referred to herein as a “data farmer”); and a network operator associated with administration module 112 (referred to herein as a “data utility” or “data utility company”).
  • A Primary Source Data Client may be billed based on the amount of system resources used, e.g., on the number of megabytes of data written to and/or read from the DFS, or based upon an amount of memory space used by the client, and so forth. This pricing model may be useful for clients that do not anticipate storing much data in the DFS. Data utility company accounting may charge for rewrite processes only for service and data transfer initiated by the client, and not for transfers caused by internal operation of the DFS (such as adjusting the location of data storage as data farms come online and go offline).
  • A Primary Source Data Client may instead be billed on a flat fee schedule (e.g., a monthly fee), or under a hybrid billing model (e.g., a flat fee up to a predetermined limit, with a per-megabyte fee above the limit). Billing may also be divided into separate fees for transmission and for storage. The client will be allowed a certain amount of monthly data transfer (writes and rewrites) along with a lease of specific amounts of data storage. Different tiered memberships may be available to meet the needs of subscription-based clients. Tiers may be structured based on factors such as storage capacity used, bytes of data transfer used, speed of data access (latency or communication speed), the number of times backups are replicated, the regions to which replications are sent, and so forth. Client-initiated rewrites of stored data would only use the allotted amount of data transfer limits associated with the client's specific subscription tier. Exceeding these limits would result in charges at premium overage rates.
  • A Primary Source Data Client may also be billed based on any number of factors or optional value-added services, such as the degree of replication or redundancy, regionalization (i.e., dispersal) of their data, strength of encryption, etc. For example, a Primary Source Data Client preferring a higher degree of data security may choose to have their data replicated on a greater number of data farms 104, with a concomitant greater billing.
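  • As an illustration of the hybrid tiered billing described in the preceding bullets, a minimal Python sketch follows: a flat monthly fee covers usage up to a tier's allowance, with per-megabyte overage charges at a premium rate beyond it. All rates and tier values are illustrative assumptions, not figures from the patent.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    monthly_fee: float       # flat fee
    included_mb: int         # transfer allowance covered by the flat fee
    overage_per_mb: float    # premium rate beyond the allowance

def monthly_bill(tier: Tier, transferred_mb: int) -> float:
    overage_mb = max(0, transferred_mb - tier.included_mb)
    return tier.monthly_fee + overage_mb * tier.overage_per_mb

basic = Tier("basic", monthly_fee=10.0, included_mb=5_000, overage_per_mb=0.02)
print(monthly_bill(basic, transferred_mb=7_500))   # 10.0 + 2,500 MB * 0.02 = 60.0
```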
  • A Primary Source Data Client may prefer to use the DFS only for emergency backup purposes, in exchange for appropriate billing that recognizes the infrequent but critical instances in which the client would be retrieving stored data. Greater latency of data retrieval for such service may be tolerated, because of the infrequent nature of such access.
  • A Primary Source Data Client may continue to use other storage not associated with the DFS without charge, e.g., local storage, remote storage (e.g., LAN-based storage, other cloud-based storage not associated with the DFS of the present invention, etc.), local backups, and so forth.
  • The Data Utility Company may provide one or more temporary data storage units in server 106 in order to store incoming client data for storage and backups.
  • The Data Utility may then replicate, encrypt, and transmit the data to multiple data farms 104.
  • Additional individual data farms 104-n may be added to the DFS, or some data farms 104-n may become inactive due to the aforementioned disruptions to the data farm 104-n and/or its communication link to WAN 101.
  • When a new data farm 104-n is added to the DFS, its addition may be recorded by the storage network module 110. Thereafter, data farm 104-n and storage network module 110 may exchange periodic ping, heartbeat or monitoring signals such that storage network module 110 is aware that data farm 104-n is operating normally.
  • If such signals cease, storage network module 110 may infer that data farm 104-n is offline and modify internal routing tables such that new data to be stored is not assigned to data farm 104-n.
  • The responsible data farmer may be notified of the status of their data farm 104-n and/or be penalized.
  • Data farm 104-n may be periodically pinged thereafter to determine if data farm 104-n has come back online. If so, and after an optional waiting period in case intermittent problems persist, the DFS may again utilize data farm 104-n as usual.
  • The Primary Source Data Client may request that their data be rewritten in the DFS.
  • The rewritten data (or portions thereof) may be written to the same data farm(s) 104-n or to different data farm(s) 104-n.
  • Such operation of the DFS may also help guard against data becoming inaccessible when stored in an offline data farm 104-n.
  • If the data is rewritten to the same data farm 104-n, then new data is not being written to the data farm 104-n, potentially providing a cost savings to a user who is billed based upon the amount of data transferred.
  • The data farmer may continue to be paid for the transfer and storage of the information. If storage exceeds limits specified in a lease agreement, the data farmer may be compensated based on new lease agreements with the Primary Source Data Client.
  • Billing agreements between a Primary Source Data Client and a data farmer may further depend upon levels of service and the spending budget available to the Primary Source Data Client. For example, the level of service could be based on the regions in which the Client's information will be stored in duplicate. Storm- or disaster-prone areas would fall under a lower-priced agreement with a higher risk factor, while locations with faster Internet infrastructure and/or lower disaster rates would command a higher-priced agreement. Billing may also depend upon the type of data farm used, the latency of the storage used, and so forth. For example, usage of data farms housed in residences may be billed at a different rate than usage of data farms housed in commercial buildings. The sensitivity of the Client's information and frequency of retrieval may also be used to determine the rate charged.
  • Further embodiments of operating the DFS may include paying a bonus to a data farmer who stores and is able to successfully retrieve a backup that is requested by a Primary Source Data Client. This will tend to encourage participation and maintenance (if any) by data farmers, e.g., by diligently tending to the electrical and data communication needs (if any) and security of their data farms.
  • Embodiments in accordance with the present invention may provide that the Data Utility will determine which data farmers, from among potentially multiple farmers qualified under the Primary Source Data Client's request (e.g., geographic location, business/residential data farm, storage media latency, etc.), will provide the backup data.
  • The Data Utility may consider one or more factors such as the present storage and communication capacity of qualified data farmers, which in turn may depend upon other factors such as the current Internet traffic in that region, locality and site bandwidth traffic, and so forth.
  • The Data Utility may choose one or more data farmers based on a preferred combination of such present factors.
  • Embodiments in accordance with the present invention may provide a choice of different service levels to data farmers, in order for the data farmer to select a level of service for providing data storage services. Differing levels of data storage services may affect decisions by data farmers regarding the types of storage assets to deploy and the location on or within their property where the data storage assets are placed.
  • For example, some data farmers may allocate 80% of their data storage capacity to a first type of storage asset and/or a first location of the storage assets in exchange for a service providing a guaranteed rate of return, another 15% of their data storage capacity to a second type of storage asset and/or a second location of the storage assets in exchange for a service providing payment at a higher rate only for storage space that is actually utilized by a Primary Source Data Client, and the final 5% of their data storage capacity to a third type of storage asset and/or a third location of the storage assets in exchange for a service providing emergency backups and over-limit data usage by Primary Source Data Clients. Such emergency backup and/or over-limit usage is less likely but will generate proportionally more revenue when utilized. This allocation may be changed periodically by the data farmer, subject to the capacity needs of the DFS and the Data Utility.
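  • As a short worked example of the allocation just described, applied to a hypothetical 1 TB data farm (figures illustrative only):

```python
CAPACITY_GB = 1000
ALLOCATION = {
    "guaranteed-rate storage": 0.80,
    "pay-per-use storage": 0.15,
    "emergency/over-limit storage": 0.05,
}

for service, share in ALLOCATION.items():
    print(f"{service}: {share * CAPACITY_GB:.0f} GB")
# guaranteed-rate storage: 800 GB
# pay-per-use storage: 150 GB
# emergency/over-limit storage: 50 GB
```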
  • Embodiments of the present invention may provide disaster recovery services to user 102-m at computing node 152-m.
  • User 102-m may notify the administrative module 112 that some or all of their data backed up in the distributed file system needs to be restored.
  • An exemplary cause is that user 102-m has suffered a failure of primary storage elements associated with computing node 152-m.
  • A disaster recovery service in accordance with an embodiment of the present invention may include providing an ability to reassemble the data of user 102-m from the various portions of the client's data that are backed up within the DFS.
  • the restored data may be supplied to the client, or may be temporarily stored in another location (e.g., in a memory within server 106 ), or be made accessible to the client in its distributed state for the purpose of distributed computing provided by cloud computing services.
  • Embodiments of the present invention include a system having one or more processing units coupled to one or more memories.
  • the one or more memories may be configured to store software that, when executed by the one or more processing unit, allows practicing embodiments of the invention, at least by use of processes described herein, including at least in the Figures and related text.
  • the disclosed methods may be readily implemented in software, such as by using object or object-oriented software development environments that provide portable source code that can be used on a variety of computer or workstation platforms.
  • the disclosed system may be implemented partially or fully in hardware, such as by using standard logic circuits or VLSI design. Whether software or hardware may be used to implement the systems in accordance with various embodiments of the present invention may be dependent on various considerations, such as the speed or efficiency requirements of the system, the particular function, and the particular software or hardware systems being utilized.

Abstract

Building support with a concealed electronic component for a structure, including: a rigid support member; a mounting attachment affixed to the rigid support member, the mounting attachment adapted to support an electronic component; and a transceiver coupled to the electronic component, the transceiver adapted to support an external communication link. Other embodiments provide a backing material to support an electronic component concealed within a building structural element, wherein the building structural element comprises one or more rigid building support members, the backing material including: a substrate; a structure attachment along at least one surface of the substrate, the attachment adapted to attach the substrate to a rigid building support member; and one or more electronic component attachments disposed on a major surface of the substrate.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This patent application claims the benefit of U.S. patent application Ser. No. 13/772,853, entitled “DATA FARMING AND SERVICE,” filed on Feb. 21, 2013, the entire content of which is hereby incorporated by reference.
  • BACKGROUND
  • 1. Field of the Invention
  • Embodiments of the present invention generally relate to a concealed building support, and, in particular, to an apparatus, system and method for providing a concealed building support that is adapted to house or support an electronic component.
  • 2. Description of Related Art
  • It is well known that we are a data-driven society. Over the past several years there has been a push to convert all of our society's global data, communications, media, etc., into a digital format and to store that information on physical media such as hard drives, CDs and DVDs. The amount of digital data our society creates is growing exponentially, and the need for data storage is growing with it. Everything from books, pictures, movies, television, personal files, business files, telephone conversations, and more, is being converted to a digital format and stored on physical media as digital data. Certain digital data may need to be archived for an extended period of time in order to satisfy recordkeeping laws, further expanding storage needs. Most new data is born digital, while existing media is being converted. Digital data storage space has become both a resource and a commodity.
  • Digital data has traditionally been stored locally in a storage asset associated with a computer that generated the data, e.g., on the computer's hard drive, or stored onto magnetic, optical, and/or removable storage media such as CDs, DVDs, removable flash drives, floppy disks, and so forth. A single storage asset, or a small number of clustered storage assets, may consume a relatively modest amount of energy, and/or generate a modest amount of heat, and/or generate little attention or scrutiny from outsiders. Alternatively, the digital data may have traditionally been stored nearby, such as memory accessible to a server on a local area network (“LAN”). These storage solutions require that an end user or an administrator attend to the storage media by, e.g., replacing defective media such as a failed hard drive, securely storing removable media such as flash drives, CDs, DVDs, etc. when not in use, rebooting a client or server if either computer crashes or enters an unstable state, making regular backups to guard against data corruption or accidental erasure of data, and so forth.
  • Due to the inconvenience involved with maintaining local or nearby computer storage, and the resultant probabilities of data loss if the computers or storage media are not properly maintained, a trend has developed in recent years to migrate storage to “the Cloud,” which can then be remotely accessed from devices such as computers, laptops, tablets and phones. Data storage in the Cloud requires fast communication access to a remotely-located storage system. Communication access is typically through the Internet, using Ethernet and TCP/IP. Other protocols may be used depending upon the data, such as real-time transport control protocol (“RTCP”) as known in the art for streaming media.
  • Cloud-based storage shifts the burden of maintaining data storage assets to a central manager, e.g., a conventional data warehouse and warehouse operator. Cloud-based storage typically requires a relatively large number of storage assets. Economies of scale may be achieved for some aspects of operation, such as having dedicated technical support available in order to tend to hardware failures, enforce security or backup policies, and so forth.
  • However, a concentration of storage assets in a data warehouse may create problems that exceed a tolerable level unless mitigated. For example, an N-fold increase in the number of storage assets may by itself cause an N-fold increase in power consumption and heat generation, which in turn requires higher-capacity climate control equipment and concomitant further increases in energy consumption for cooling. Modern data warehouses have become massive facilities that consume large amounts of power, occupy large plots of land, and require high-capacity communication trunks to support the data traffic.
  • Furthermore, the conventional data warehouse draws attention to itself due to its physical size, the value of the data stored within it, and the threat of business disruption if the data warehouse were to be attacked or otherwise suffer a failure. Conventional wisdom teaches away from a disfavored maxim known as “security through obscurity,” which holds that sufficient security of an asset may be achieved by attempting to hide the asset without the need for overwhelming security protection. Thus the data warehouse requires increased physical security in order to guard against criminals, terrorists and similar threats.
  • The required infrastructure of a conventional data warehouse, i.e., to supply the electrical energy, to supply cooling capacity, to supply communication network capacity, and to supply physical security, increases the cost and eco-footprint of operating a data warehouse and may not be appealing to eco-conscious consumers or consumers who seek a lower cost to store data in the Cloud.
  • Therefore, a need exists to provide a Cloud-based storage system that is less resource-intensive to operate than a traditional data warehouse, in order to provide a lower-cost and/or more eco-friendly storage system for customers, and ultimately improved customer satisfaction.
  • SUMMARY
  • Embodiments in accordance with the present invention avoid the drawbacks of the known art by providing a dispersed, distributed file system in order to host Cloud-based storage. Storage nodes, which may be individual hard drives or clusters of co-located hard drives, may be dispersed and located within buildings that are not ordinarily used for data warehouses, such as ordinary homes, office buildings, retail locations, and so forth.
  • Storage nodes may be located within unobtrusive but otherwise unused space of the buildings, such as (in the case of an ordinary home) structural space and/or open interior space that is enclosed by the structure. Open interior space may include attic space, basement space, and so forth. Structural space may include space within the structure itself, such as space within walls, space under floorboards, and so forth. Structural space is often closed off with limited physical access compared to open interior space. The dispersed, redundant, fault-tolerant and unobtrusive nature of the storage nodes reduces the need for electrical power, environmental control, communication speeds, and elaborate security measures.
  • Building owners are encouraged to host storage nodes in their properties and participate in ongoing operation of a storage network, by receiving fees or other monetary incentives (e.g., royalty checks, discount coupons from local merchants, etc.), or non-monetary incentives (e.g., preferred memberships in a business such as a local gym, charitable donations in their name, etc.).
  • Embodiments in accordance with the present invention may be marketed to data farmers and Primary Source Data Clients as a “green” (i.e., eco-friendly) product. Compared to large data centers, embodiments use substantially less electricity. Conventional data storage centers typically use hard drive storage, which relies on spinning motorized drives that are constantly powered. In contrast, embodiments may utilize solid state technologies, reducing the power consumed and the heat produced by storage. With the use of more efficient technology and the dispersal of individual storage assets, large data centers will not be required. The elimination of these centers would in turn eliminate the need for large heating, ventilation and air conditioning (“HVAC”) equipment and their resultant large power demands.
  • Embodiments in accordance with the present invention may provide a system and method for distributed file storage, the system including: a plurality of data farms, each data farm including: a data storage module; a local control module comprising a data protection module; and a communication interface between said data farm and a wide-area network; an interface to one or more remote data applications; and an administrative module configured to record a quantity of data received or transmitted by the communication interface of the data farm. The method may include: providing a plurality of data farms; accepting data from a remote data application; selecting a data farm from among the plurality of data farms in which to store the data; and storing the data in the selected data farm.
  • Embodiments in accordance with the present invention may provide a building support with a concealed electronic component for a structure, including: a rigid support member; a mounting attachment affixed to the rigid support member, the mounting attachment adapted to support an electronic component; and a transceiver coupled to the electronic component, the transceiver adapted to support an external communication link.
  • Embodiments in accordance with the present invention may provide a backing material to support an electronic component concealed within a building structural element, wherein the building structural element comprises one or more rigid building support members, the backing material including: a substrate; a structure attachment along at least one surface of the substrate, the attachment adapted to attach the substrate to a rigid building support member; and one or more electronic component attachments disposed on a major surface of the substrate.
  • The preceding is a simplified summary of embodiments of the disclosure to provide an understanding of some aspects of the disclosure. This summary is neither an extensive nor exhaustive overview of the disclosure and its various embodiments. It is intended neither to identify key or critical elements of the disclosure nor to delineate the scope of the disclosure but to present selected concepts of the disclosure in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the disclosure are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The above and still further features and advantages of the present invention will become apparent upon consideration of the following detailed description of embodiments thereof, especially when taken in conjunction with the accompanying drawings wherein like reference numerals in the various figures are utilized to designate like components, and wherein:
  • FIG. 1 is a block diagram depicting a distributed file system in accordance with an embodiment of the present invention;
  • FIG. 2 illustrates at a high level of abstraction a server of a distributed file system, in accordance with an embodiment of the present invention;
  • FIG. 3 illustrates an exemplary structural storage location, in accordance with an embodiment of the present invention;
  • FIG. 4 illustrates a housing for a storage apparatus, in accordance with an embodiment of the present invention; and
  • FIG. 5 illustrates a cross sectional view of a plurality of housings deployed at a data farm, in accordance with an embodiment of the present invention.
  • The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including but not limited to. To facilitate understanding, like reference numerals have been used, where possible, to designate like elements common to the figures. Optional portions of the figures may be illustrated using dashed or dotted lines, unless the context of usage indicates otherwise.
  • DETAILED DESCRIPTION
  • The disclosure will be illustrated below in conjunction with an exemplary communication system. Although well suited for use with, e.g., a system using a server(s) and/or database(s), the disclosure is not limited to use with any particular type of communication system or configuration of system elements. Those skilled in the art will recognize that the disclosed techniques may be used in any communication application in which it is desirable to utilize a low-cost and low-overhead distributed file system.
  • The exemplary systems and methods of this disclosure will also be described in relation to software, modules, and associated hardware. However, to avoid unnecessarily obscuring the present disclosure, the following description omits well-known structures, components and devices that may be shown in block diagram form, are well known, or are otherwise summarized.
  • In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of embodiments or other examples described herein. In some instances, well-known methods, procedures, components and circuits have not been described in detail, so as to not obscure the following description. Further, the examples disclosed are for exemplary purposes only and other examples may be employed in lieu of, or in combination with, the examples disclosed. It should also be noted the examples presented herein should not be construed as limiting of the scope of embodiments of the present invention, as other equally effective examples are possible and likely.
  • As used herein, the term “module” refers generally to a logical sequence or association of steps, processes or components. For example, a software module may comprise a set of associated routines or subroutines within a computer program. Alternatively, a module may comprise a substantially self-contained hardware device. A module may also comprise a logical set of processes irrespective of any software or hardware implementation.
  • The term “computer-readable medium” as used herein refers to any tangible storage and/or transmission medium that participates in storing and/or providing instructions to a processor for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, NVRAM, or magnetic or optical disks. Volatile media includes dynamic memory, such as main memory. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, magneto-optical medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, solid state medium like a memory card, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read. Computer-readable medium may also include volatile or non-volatile emerging storage media such as data encoded in chemical or organic-chemical cells, and holographic cells. A digital file attachment to e-mail or other self-contained information archive or set of archives is considered a distribution medium equivalent to a tangible storage medium. When the computer-readable media is configured as a database, it is to be understood that the database may be any type of database, such as relational, hierarchical, object-oriented, and/or the like. Accordingly, the disclosure is considered to include a tangible storage medium or distribution medium and prior art-recognized equivalents and successor media, in which the software implementations of the present disclosure are stored.
  • Embodiments in accordance with the present invention provide a system for data storage and backup that utilizes existing underutilized or “wasted” spaces, voids, etc., inside industrial, commercial and residential buildings, in order to generate a new source of data storage space and to create a symbiotic data storage relationship among big corporations, small businesses, homeowners, and data servicers.
  • The data farm hosts may receive income for storing the data within their structures. They may provide a dedicated high-speed internet connection for the storage system, thus allowing fast access and download of backup information as well as the retrieval of documentation at any time.
  • Embodiments in accordance with the present invention will provide to data generators and data users (e.g., corporate data users) an alternate physical location to store their digital data and backups. These new digital storage locales will offer their users multiple backups around the world, further protecting corporations' backups from natural disasters and from both physical and cyber attacks.
  • The data storage devices will be installed in “empty” spaces or “voids” in industrial, residential and commercial structures. Such hidden, discreet or unobtrusive locations may include, but are not limited to, cavities inside the wall space, attic space, heating, ventilation and air conditioning (“HVAC”) ducts, conduit, etc. Typically the data storage devices may include solid state storage units within a protective enclosure, which are then installed in discreet locations. If the data storage device is installed within an exterior wall or other boundary with an area that is not temperature-controlled, the device sheathing or housing will tend to reduce the resulting temperature variations. The storage devices may also be incorporated into a number of construction materials to utilize the mass of the structure for data storage, for example, along the side of a steel I-beam and/or steel stud, along the surface of metal paneling, or within voids pre-formed into concrete slabs, planks, studs, etc., i.e., substantially any place that a void exists (either natural or planned). Existing structures could be retrofitted, and new construction could use building materials prefabricated with data storage devices. The data storage units may be interconnected and gridded for optimal flow of data and storage throughout the structure while consuming less energy than traditional storage facilities.
  • The data sent and stored to these data storage devices would be managed and maintained by a third-party data servicing company. The third-party providers would coordinate backups between corporations seeking backup security and the “housers” of the storage units (i.e., the data farmers). Charges may be based on the size of the system and the frequency with which information is retrieved for restore purposes. Charges may also be based on how much data is transferred to or from the “housers” on a daily basis.
  • Data security is an important consideration related to the transfer and storage of the data. The data may be encrypted by a third-party provider so that if an unauthorized entity attempts to access data stored in a data farm, the unauthorized entity would not be able to decipher the data. Access by proxy may be allowed, wherein a user or process may access the data in a data farm on behalf of an end user, decrypt or otherwise process the retrieved data, then send the processed data to the end user. Improved security is provided by limiting knowledge of a data farm's existence and operation to persons at that farm having a need to know. A data farmer will not know whose information is being routed through their storage devices, nor will the end-client know precisely at what locations or on which data farms their information is stored. This provides the invention's security through obscurity.
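  • By way of illustration only, the following Python sketch shows one way the encrypt-before-dispersal and access-by-proxy flow described above might look, using the well-known "cryptography" package's Fernet recipe; the function names and key handling are invented for this example and are not part of the disclosed system.
```python
# Minimal sketch: a third-party servicer encrypts client data before it is
# dispersed to data farms, so a farmer who reads the raw bytes cannot
# decipher them. Key management and dispersal are elided.
from cryptography.fernet import Fernet

def encrypt_for_dispersal(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt a client's data block before sending it to a data farm."""
    return Fernet(key).encrypt(plaintext)

def decrypt_by_proxy(ciphertext: bytes, key: bytes) -> bytes:
    """Proxy access: the servicer decrypts retrieved data on behalf of
    the end user, who never learns which farm stored it."""
    return Fernet(key).decrypt(ciphertext)

key = Fernet.generate_key()          # held only by the data servicer
stored = encrypt_for_dispersal(b"client backup block", key)
assert decrypt_by_proxy(stored, key) == b"client backup block"
```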
  • FIG. 1 illustrates at a high level of abstraction a system 100 in accordance with an embodiment of the invention. System 100 includes a plurality of computing nodes 152-1 . . . 152-M hosting one or more user applications 102-1 . . . 102-M, a plurality of data farms 104-1 . . . 104-N, and a server 106, interconnected as shown through a wide area network (“WAN”) 101 such as the Internet. An individual but unspecific user application may be referred to herein as user application 102-m or as user application 102. An individual but unspecific data farm may be referred to herein as data farm 104-m or as data farm 104. An individual but unspecific computing node may be referred to herein as computing node 152-m or as computing node 152. Computing node 152 may include a server coupled to a memory and associated internal and/or external communication interfaces in order to support user application 102.
  • Server 106 may be a software-controlled system including a processing unit (CPU), microprocessor, or other type of digital data processor executing software or an Application-Specific Integrated Circuit (ASIC) as well as various portions or combinations of such elements. Server 106 may further include a storage network module 110 and/or an administrative module 112.
  • FIG. 2 depicts a distributed file system (“DFS”) 200 according to an embodiment of the present disclosure, with an emphasis on depicting exemplary components of server 106 at a lower level of abstraction. DFS 200 may include a server 106 that is in communication, via a (typically untrusted or unsecure or public) WAN 101, with one or more external computing nodes 152. The external computing nodes 152 are not under the direct control of the enterprise administering the server 106 and/or have a decreased level of trust with the server 106 as compared with communication devices 236-1 . . . 236-K that are within the server 106. Communication devices 236-1 . . . 236-K may include a local terminal or similar interface to provide direct, local control of server 106. Exemplary types of external computing nodes 152 include, without limitation, laptops, Personal Computers (PCs), Personal Digital Assistants (PDAs), gateways to other LANs or WANs, and the like.
  • The server 106 may include a boundary device 216 including a server table 220, one or more internal communication devices 236-1 . . . 236-K, one or more application servers 244 which may be capable of providing one application 248 or a set of different applications 252, a number of other servers 256 to provide other functions of server 106, and an enterprise database 260, all of which are interconnected by a (trusted or secure or private) Local Area Network (LAN) 264. Some or all of the functions depicted in FIG. 2 may be co-hosted and/or co-resident on a single server. The depiction of components in FIG. 2 is generally intended to be a logical depiction of the components of the system 200.
  • The LAN 264 can be secured from intrusion by untrusted parties by a gateway and/or firewall located between the LAN 264 and WAN 101. In some embodiments the boundary device 216 may include the functionality of the gateway and/or firewall. In some embodiments, a separate gateway or firewall may be provided between the boundary device 216 and WAN 101.
  • In some embodiments, network boundary device 216 is responsible for initially routing communications within the server 106 for servicing a particular user involved in accessing the DFS. Application server 244, together with enterprise database 260, may perform the functions of storage network module 110.
  • Although only two application servers 244 are depicted, one skilled in the art will appreciate that one, two, three, or more application servers 244 can be provided and that each server may be configured to provide one or more applications. The applications provided by a particular application server 244 may vary depending upon the capabilities of the server 244, and in the event that a particular application server 244 comprises a set of applications 252, one, some, or all of the applications in that set of applications 252 may be included in a particular application sequence. Application server 244 may be used to perform the functions of administrative module 112.
  • Referring again to FIG. 1, each data farm 104-n may include a storage apparatus 126 and a local control module 124. Storage apparatus 126 may include substantially any type of computer-readable medium. Local control module 124 provides a communication interface between WAN 101 and storage apparatus 126. Local control module 124 may further provide firewall, gateway, routing functions, administrative and local processing control of its associated data farm 104. Local control module 124 acts as a server for its associated data farm 104.
  • At least some of data farms 104-n may differ from one another based upon factors such as the type of storage technology used for storage apparatus 126, its associated latency, and the speed and/or latency of its associated communication link to WAN 101. Similarly, at least some of computing nodes 152-m may differ from one another at least in computing throughput and in the speed and/or latency of their associated communication links to WAN 101. Therefore, system 100 may be able to accommodate a heterogeneous and geographically diverse network environment, unlike some systems of the known art in which each data farm 104-n may represent, e.g., a rack of storage units in a conventional data warehouse. System 100 may be useful for storage applications in which relatively higher latencies may be tolerable, such as a storage application used to infrequently access data stored for archival backup purposes.
  • In some embodiments, a portion of memory associated with computing node 152 may be usable as part of a data farm 104 for a different computing node 152.
  • The plurality of data farms 104 together provide a distributed file system used by one or more of user applications 102. User applications 102 write data to the DFS and/or read data from the DFS, and are thus data users. The DFS optionally operates under the control of server 106, and in particular under the control of storage network module 110. The DFS is designed to store very large data sets reliably, and to stream those data sets to user applications 102. A large DFS may include thousands of data farms 104. By distributing storage and local processing control across many servers, the DFS may grow with demand while remaining economical at substantially every size.
  • One protocol for providing a distributed file system is Hadoop, which provides a framework for the analysis and transformation of very large data sets using the MapReduce paradigm. Under Hadoop, data and computation may be partitioned across thousands of data farms 104. A Hadoop-based DFS may scale storage capacity and I/O bandwidth by simply adding additional data farms 104.
  • Server 106 may implement Hadoop NameNode functions, and data farms 104 may implement DataNode functions. Under Hadoop, the NameNode is important to the operation of the DFS. The NameNode keeps a directory tree of all files in the file system, and tracks where across the cluster the file data is kept. The NameNode does not store the data of these files itself.
  • User applications 102 communicate with the NameNode whenever the user application 102 attempts to locate a file in the DFS, or when the user application 102 attempts to add/copy/move/delete a file in the DFS. The NameNode responds to successful requests by returning a list of relevant DataNode servers where the data is stored.
  • The NameNode should be a highly reliable computing element, since if the NameNode fails, the DFS will go offline. An optional secondary NameNode may be used to provide protection if a primary NameNode fails. The NameNode should be hosted on a server having a large amount of RAM.
  • Under Hadoop, the DFS stores data in DataNodes. The DFS has numerous DataNodes, and data may be replicated across more than one DataNode. On startup, a DataNode connects to the NameNode and is then ready to support DFS operations.
  • User applications 102 may communicate directly to a DataNode after the NameNode has provided the location of the data. DataNodes may communicate with each other, such as if they are replicating data. There is usually no need to use RAID storage for DataNode data, because data is designed to be replicated across multiple data farms 104, rather than multiple disks on the same data farm 104.
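  • As a hedged illustration of the NameNode/DataNode interaction described above, the following Python sketch uses the third-party "hdfs" package (a WebHDFS client); the host name, port, user, paths and payload are placeholders, not values prescribed by this disclosure.
```python
# Illustrative only: a user application stores and retrieves a file through
# the NameNode, which brokers metadata while DataNodes (data farms) hold the
# actual blocks.
from hdfs import InsecureClient

client = InsecureClient('http://namenode.example:9870', user='dfs_user')

# Write: the NameNode chooses which DataNodes receive the replicated blocks.
client.write('/backups/client42/ledger.db', data=b'payload bytes',
             overwrite=True, replication=3)

# Read: the NameNode returns block locations; data flows from the DataNodes.
with client.read('/backups/client42/ledger.db') as reader:
    restored = reader.read()
```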
  • The distributed file system may be based on other protocols known in the art, such as BitTorrent, PVFS, or Lustre. For example, Lustre is a storage architecture for data clusters whose central component is the Lustre file system, a shared file system for clusters. The Lustre file system is currently available for Linux and provides a POSIX-compliant UNIX file system interface. Storage network module 110 may not be needed if a peer-to-peer protocol such as BitTorrent is used.
  • Each farm of data farms 104 may have characteristics such as capacity, underlying storage technology, storage latency, communication latency, local controller capabilities, etc., that are independent of similar characteristics at other farms of data farms 104. These characteristics may vary significantly from one data farm 104 to another. In this way, the DFS is operable with a heterogeneous collection of data farms.
  • Each of data farms 104 may be located in widely dispersed locations, such as in discreet and unobtrusive locations in structures located substantially anywhere nationwide or worldwide, so long as it is reachable through WAN 101. A structure such as a house typically has hidden spaces that are physically large enough to install a computer-readable medium. Such a computer-readable medium could be installed during construction or retrofitted later, for use as a data farm 104-m. For example, FIG. 3 illustrates an internal view of a wall 300 without attached drywall. The wall typically includes a horizontal top plate, a horizontal bottom plate, and a plurality of vertical wall studs. The wall usually includes wiring as part of the electrical system, and sometimes also includes separate wiring for an Ethernet network. Access to the electrical system external to the wall may be provided by an electric plug socket.
  • Other discreet and unobtrusive locations within a typical house include within or between floor joists, basement areas, attic areas, under a stairway, hollow core doors, etc. Within an office building, other discreet and unobtrusive locations may be available, such as under a raised floorboard, above a false ceiling, within modular walls, and so forth.
  • The discreet and unobtrusive locations should be physically large enough to accommodate a storage apparatus 126 (or a cluster of storage apparatus 126) and associated local control 124, typically within a support and protective enclosure. For example, a low-latency storage apparatus 126 may include a portable disk drive or a solid state drive, commonly available in sizes of approximately 3″×4″×1″. A high-latency storage apparatus 126 may include a USB flash drive, commonly available in sizes of approximately 2″×0.75″×0.4″. A tablet-based local controller 124 may be approximately 7″×4″×1″. In some embodiments the local controller 124 may include communication interface 128. Deployment of systems in accordance with an embodiment of the present invention will be able to adapt to new structural materials and building methods, e.g., the writing of data to the surface of wall panels.
  • Other locations may be used as a data farm 104, so long as there is access to electrical power and communication services as may be required. For example, data farm 104 may be placed outdoors, such as on a rooftop or atop a telephone pole, or incorporated into renewable energy systems (e.g., solar panels, wind turbines, etc.). Such locations may also rely upon renewable power (e.g., solar panels, a wind turbine, etc.) with battery backup for electrical power, and WiFi signals (either public or as provided by a data farmer) for a communication link. Space in other structures such as garages and sheds may also be used. Structural materials (e.g., studs) may be pre-configured to include storage modules. Such pre-configured structural materials may be well suited to retrofitting a structure to include embodiments in accordance with the present invention, with minimal impact to other aspects of usage of the structure. Outdoor locations and other uncontrolled environments may require local controllers 124 and storage apparatus 126 that are ruggedized for the expected temperature swings and protected against wind, rain, and other elements.
  • A higher-latency storage apparatus 126 may include one or more USB flash drives. Flash drives are commonly available in capacities ranging from 2 GB to 64 GB and higher. For example, a controller such as an Android tablet or similar compact computing device may include a USB port. The USB port may be further fanned-out by use of one or more USB hubs. Flash drives may then be connected to one or more of the USB ports. Usage of flash drives for the storage apparatus has an added benefit of typically consuming less power than usage of a portable disk drive.
  • In a further embodiment, individual storage units having appropriate USB ports may be daisy-chained together, in order to provide a composite storage apparatus 126 having more data storage capacity than an individual storage unit.
  • In a further embodiment, a data farm 104 may include an expandable data bus. Individual storage units may be added to data farm 104 by connecting the additional storage unit to the data bus. Expandability of an individual data farm 104 may also be achieved by usage of wireless communication methods, which may be inherently expandable. Wireless methods may include WiFi (IEEE 802.11) and short-range communication such as Bluetooth (IEEE 802.15.1), Zigbee (IEEE 802.15.4), etc.
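  • The following Python sketch suggests, under assumed mount paths, how a local controller might treat several hub-attached flash drives as one expandable composite storage apparatus by always writing to the drive with the most free space; all names are illustrative, not part of the disclosure.
```python
# Hedged sketch: enumerate hub-attached drives under an assumed mount root,
# then place each incoming block on the drive with the most free space.
# Newly plugged-in drives are picked up automatically on the next call.
import os
import shutil

MOUNT_ROOT = '/media/datafarm'   # hypothetical mount point for the USB fan-out

def attached_drives() -> list[str]:
    return [os.path.join(MOUNT_ROOT, d) for d in os.listdir(MOUNT_ROOT)
            if os.path.ismount(os.path.join(MOUNT_ROOT, d))]

def store_block(block_id: str, payload: bytes) -> str:
    """Write one block to the attached drive with the most free space."""
    target = max(attached_drives(), key=lambda d: shutil.disk_usage(d).free)
    path = os.path.join(target, block_id)
    with open(path, 'wb') as f:
        f.write(payload)
    return path
```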
  • All storage apparatus 126 and associated local control units 124 at data farm 104 should be substantially concealed. A person should not be aware that they are within a data farm 104. Data farm 104 may be used as primary data storage or as backup data storage for remote clients. Higher-latency storage devices may be more useful for backup storage applications.
  • Referring again to FIG. 3, a storage enclosure 302 may be located within the wall 300. Storage enclosure 302 includes the components of data farm 104. Physical characteristics of storage enclosure 302 may be adapted to the specific installation location. For example, the size, shape, capacity, etc. of storage enclosure 302, and the mounting or support it provides to components of data farm 104, may be adapted to the available space. Storage enclosure 302 may not be fully enclosed, e.g., a top direction may be left open for ease of maintenance and because, for mounting locations within a wall, there is ordinarily little risk of unwanted physical intrusion from the top. Enclosure requirements (e.g., the degree of enclosure, which directions may be relatively exposed, the strength of the enclosure, etc.) may vary from one installation site to another, based on factors such as the installation location and the method of securing the enclosure and/or data farm.
  • Once drywall is attached to the wall studs, the storage enclosure 302 will be hidden from view. Electrical power may be supplied to storage enclosure 302 by tapping into electrical wiring that is already present within the wall. Alternatively, electrical power may be supplied by discreetly routing power supply wires through walls, floors, etc. to the location of storage enclosure 302. Data farms 104 having sufficiently low electrical power consumption may be powered by wireless methods and systems such as inductive power coupling. An inductive power system includes a transmitter coil and a receiver coil. Both coils form a system of magnetically coupled inductors. An alternating current in the transmitter coil generates a magnetic field which induces a voltage in the receiver coil. This voltage can be used to power a sufficiently low-power data farm 104. The transmitting coil may be located on the room-facing side of a drywall wall, and the receiving coil on the interior-facing side of the drywall wall. One portion of a data farm (e.g., a controller or hub) may be configured to receive power from an outside source, and other components (e.g., USB devices) may be configured to receive any necessary power from the controller or hub through a USB link (or other communication link).
  • Similarly, data farm 104 should include a communication interface 128 to WAN 101. The communication link may include one or more embodiments such as: an Ethernet interface to a broadband access point (e.g., a hub and/or router); a wireless interface (e.g., WiFi) to a host-supplied broadband access point (e.g., a wireless router); a WiFi interface to a public WiFi hotspot; a 4G wireless cell phone interface to a cell phone communication tower; a USB link; a fiber optic link; a wireless (free-space) optical link such as a laser link; and so forth. Power may be conserved by placing the data farm 104 in a receive-only or listen-only mode until the data farm 104 needs to respond (e.g., providing data, responding to a ping, etc.).
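  • A minimal Python sketch of such a listen-only mode follows, assuming a UDP control channel and an arbitrary example port; these are invented details, and the point is only that the farm transmits nothing until spoken to.
```python
# Power-conserving sketch: the data farm idles in a blocking receive loop
# and transmits only when answering a ping from the storage network module.
import socket

def listen_only(port: int = 9400) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('', port))
    while True:
        msg, addr = sock.recvfrom(1024)     # blocks; radio stays quiet
        if msg == b'PING':
            sock.sendto(b'PONG', addr)      # transmit only on demand
```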
  • FIG. 4 illustrates a housing 400 for a storage apparatus 126 in accordance with an embodiment of the present invention. Although housing 400 is illustrated as a parallelepiped with six surfaces (including two major surfaces and four edge surfaces), the shape of housing 400 is not constrained to such a shape. Alternative shapes having other numbers of surfaces may be used, such as more surfaces (e.g., a geodesic shape, solid hexagon, etc.) or fewer surfaces (e.g., sphere, cone, pyramid, etc.). Furthermore, housing 400 need not fully enclose a space. For example, housing 400 may include one open side. Alternatively, housing 400 may include only one surface, configured to mount storage apparatus to one side of the mounting surface, and the mounting surface being attached on its other side to a rigid base.
  • Housing 400 may include one or more transceivers 402 configured to communicate over at least a 90-degree arc around housing 400. Preferably, multiple transceivers 402 are used to provide 360-degree communication in a plane surrounding housing 400. Alternatively, multiple transceivers 402 may be used to provide spherical or hemispherical coverage around housing 400. FIG. 4 illustrates transceivers 402 located on two of the three visible planar surfaces of housing 400. Transceivers 402 may be located at substantially any exterior points of housing 400, including a surface, an edge, and/or a corner. Alternatively, transceivers 402 may be mounted internally to housing 400, either behind a surface of housing 400 that is transparent to the wireless signal (e.g., a glass window for optical signals), or configured to transmit through an open side of housing 400.
  • Transceivers 402 may be used to communicate wirelessly with storage apparatus 126 within housing 400. Communication may include: sending and/or receiving data to be stored in, or retrieved from, storage apparatus 126; controlling storage apparatus 126; and/or transmitting status of storage apparatus 126. Transceivers 402 may use substantially any wireless communication technology, such as RF (e.g., WiFi, ZigBee, Bluetooth, etc.), infrared, optical (e.g., LED or laser), ultrasound, etc.
  • Within housing 400 there may be included a securing apparatus to securely attach storage apparatus 126 to housing 400. Securing apparatus may include screws, bolts, adhesive, tie-straps, and so forth. Housing 400 may further include a controller configured to read/write data to/from the storage apparatus 126. The controller may be further configured to provide a communication interface via transceivers 402. The controller may be further configured to provide a status or state of health via transceivers 402, either periodically or in response to a query.
  • FIG. 5 illustrates a cross-sectional view 500 of a plurality of housings 400 installed within a wall, in accordance with an embodiment of the present invention. View 500 illustrates three wall studs 502, but persons of skill in the art will understand how to extend view 500 to more than three studs. Between a pair of adjacent studs 502 may be deployed a plurality of housings 400. An individual housing 400 may be in communicative contact with at least one neighboring housing 400. A backing material 504 may be provided, upon which at least some of the plurality of housings 400 may be mounted. Backing material 504 may be a mesh, webbing, solid board, combination thereof, and so forth, that is sufficiently strong to maintain the positions of housings 400 relative to one another, so that communications with a neighboring housing 400 via transceivers 402 may be supported. Backing material 504 allows a plurality of housings 400 to be installed on it in advance (e.g., at a factory), after which the populated backing material is attached to studs 502. Backing material 504 may be large enough to be secured on opposite sides to studs 502 that are separated by a standard distance as known in the building construction arts.
  • Communication between different sides of a stud 502 may be facilitated by an aperture 508 within stud 502. Aperture 508 may allow housings 400 on opposite sides of stud 502 to communicate with one another. For example, housing 510 and housing 512, on opposite sides of the center stud 502 of FIG. 5, may be in communicative contact with one another. The communicative contact may be by wired or wireless methods.
  • In another embodiment in accordance with the present invention, a backing material 506 may be provided that is attached on at least one side 516 to a secure object such as stud 502, and having at least one other side 518 that is configured to be expandable. Although side 518 is illustrated opposite side 516, side 518 may be positioned substantially anywhere along a perimeter or major surface of backing 506, or a portion thereof, so long as the attachment of side 516 to a secure object is not impaired. Side 518 may include a locking apparatus 514 (e.g., interlocking protrusions) that is configured to interlock with matching voids of an expansion board (not shown in FIG. 5), thereby providing expandability. The expansion board may be substantially similar to backing 506.
  • The plurality of housings 400 may be in communicative contact with a controller 520, which in turn is communicatively connected to WAN 101. The plurality of housings 400 may communicate by use of protocols known in the art of data networking, such as a flood protocol.
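  • One plausible reading of such a flood protocol is sketched below in Python: each housing relays a message to its neighbors once, using a seen-message set and a hop limit so that floods terminate. The class and field names are invented for illustration, with neighbor links standing in for the transceivers 402.
```python
# Illustrative flood protocol among neighboring housings 400.
class Housing:
    def __init__(self, ident: str):
        self.ident = ident
        self.neighbors: list['Housing'] = []
        self.seen: set[str] = set()

    def receive(self, msg_id: str, payload: bytes, ttl: int = 8) -> None:
        if msg_id in self.seen or ttl <= 0:
            return                          # already relayed, or hop limit hit
        self.seen.add(msg_id)
        self.handle(payload)                # e.g., store, or pass to controller 520
        for n in self.neighbors:
            n.receive(msg_id, payload, ttl - 1)

    def handle(self, payload: bytes) -> None:
        pass                                # application-specific processing
```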
  • In some embodiments, housings 400 may be able to report on a state of health or state of failure of storage apparatus 126 within the respective housing 400, along with an identifier of the housing 400 reporting the health or failure. In such embodiments, at least in part because of a relatively fixed spatial relationship provided by attaching housings 400 to backing material 504 or 506, a failure map may be generated and made available to maintenance personnel. The failure map may provide a graphical depiction of a specific housing 400 that is reporting a failure (or lack of reporting of good health). The failure map may facilitate repairs by identifying failed storage apparatus 126 for quick replacement.
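  • Because the backing material fixes each housing's position, a failure map can be little more than a grid rendering of health reports keyed by housing identifier, as in the following illustrative Python sketch; the layout and identifiers are hypothetical.
```python
# Render an 'O'/'X' grid for maintenance personnel from health reports.
def failure_map(positions: dict[str, tuple[int, int]],
                healthy: set[str]) -> str:
    rows = 1 + max(r for r, _ in positions.values())
    cols = 1 + max(c for _, c in positions.values())
    grid = [['.' for _ in range(cols)] for _ in range(rows)]
    for ident, (r, c) in positions.items():
        # 'X' marks a housing that failed to report good health
        grid[r][c] = 'O' if ident in healthy else 'X'
    return '\n'.join(''.join(row) for row in grid)

layout = {'H-510': (0, 0), 'H-512': (0, 1), 'H-514': (1, 0)}
print(failure_map(layout, healthy={'H-510', 'H-514'}))   # X marks H-512
```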
  • Conventional data warehouses are housed within a trusted data and computing environment, such that strong data security measures against malicious attack are unnecessary for communications within the environment. In contrast, embodiments in accordance with the present invention include data farms located in widely dispersed locations, which are typically interconnected through an untrusted WAN 101 such as the Internet. Therefore, each dispersed data farm 104 should include a data protection module such as a firewall, anti-virus processes, and so forth. Data protection modules may be implemented in local controller 124 and/or communication interface 128. Each data farm 104 includes a trusted environment behind its respective data protection module but, as among separate and different data farms 104, the separate data farms 104 are in an untrusted data relationship.
  • In some embodiments in accordance with the present invention, the plurality of housings 400 may be mounted on a visible surface, rather than inside a wall. For example, a plurality of housings 400 may be attached to a visible surface of a wall in order to provide an artistic display. The plurality of housings 400 may be secured to the wall by conventional apparatus such as screws, bolts, clamps, welds, adhesive, Velcro, and so forth.
  • For distributed file systems that employ a central administrative node (e.g., a DFS based on Hadoop or similar), storage network module 110 may attend to control aspects of operating the DFS, and administrative module 112 may attend to billing and credit aspects of operating the DFS.
  • For distributed file systems that do not employ a central administrative node (e.g., a DFS based on BitTorrent or similar), individual local controllers 124-n in associated data farms 104-n, in cooperation with controllers and storage in computing nodes 152-m, may attend to control aspects of operating the DFS, and to monitoring of data bandwidth usage through the associated node 152-m or data farm 104-n for billing and administrative purposes. Usage data so collected may be reported to administrative module 112, which may then attend to billing and credit (i.e., compensation) aspects of operating the DFS.
  • Communication service and/or electrical power to data farm 104 may be disrupted at certain times. For example, downed trees may cause disruption to electrical or Internet connectivity, or a homeowner may perform renovation work that unwittingly affects the data farm hardware, or a homeowner may change broadband communication service providers, or may decide to let such service lapse (such as if moving), or the service may be disconnected, and so forth. Many such scenarios are possible. Consequently, system operation of the DFS should be resilient to failure or disconnection of individual data farms 104 from the network. Techniques to ensure resiliency are known in the art and include data replication, data striping, RAID storage, error correction codes, etc.
  • In one embodiment, system resiliency may be achieved by replicating multiple copies of data throughout the DFS, such that each data item is stored on two or more data farms 104. A system controller such as storage network module 110 may monitor a state of health of one or more data farms. Monitoring may be useful to determine utilization of the data farm, whether the data farm is online or offline, error conditions, and so forth. Monitoring a data farm may include periodically pinging each data farm 104 to determine if it is still functional. Alternatively, each data farm 104 may be configured to provide a periodic heartbeat signal to the system controller. The heartbeat may include status information. The system controller may keep track of whether each data farm 104 is online or offline. If an individual data farm goes offline, the system controller may then replicate the data items that had been stored on the offline data farm 104, by contacting the functioning data farms 104 that are storing copies of the data items. Copies of the data items may then be stored on one or more additional data farms 104, either by first sending the data items to storage network module 110 for redistribution, or by sending the data items directly to the selected data farms 104.
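  • The following Python sketch illustrates one way such a monitoring and re-replication loop might be kept; the timeout value and in-memory tables are assumptions for the example, not requirements of the disclosure.
```python
# The controller records the last heartbeat per farm, declares a farm offline
# after a timeout, and returns the blocks needing fresh copies made from
# surviving replicas.
import time

HEARTBEAT_TIMEOUT = 300          # seconds without a heartbeat => offline

class StorageNetworkModule:
    def __init__(self):
        self.last_seen: dict[str, float] = {}     # farm id -> timestamp
        self.replicas: dict[str, set[str]] = {}   # block id -> farm ids

    def heartbeat(self, farm_id: str) -> None:
        self.last_seen[farm_id] = time.time()

    def sweep(self) -> list[str]:
        """Return blocks that lost a replica but still have a survivor."""
        now = time.time()
        offline = {f for f, t in self.last_seen.items()
                   if now - t > HEARTBEAT_TIMEOUT}
        return [block for block, farms in self.replicas.items()
                if farms & offline and farms - offline]
        # each returned block is re-copied from a surviving farm to a new one
```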
  • Embodiments in accordance with the present invention include a method of operating a DFS such that property owners are compensated for housing a data farm 104, and users of computing nodes 152-m obtain the benefit of a secure and resilient DFS at a lower overall cost than from a traditional highly secure data farm.
  • A method of operating the DFS may involve actions by entities such as: a user of a computing node 152-m (referred to herein as a “Primary Source Data Client”); a property owner associated with data farm 104-n (referred to herein as a “data farmer”); and a network operator associated with administration module 112 (referred to herein as a “data utility” or “data utility company”).
  • A Primary Source Data Client may be billed on the amount of system resources used, e.g., on the number of megabytes of data written to and/or read from the DFS, or based upon an amount of memory space used by the client, and so forth. This pricing model may be useful for clients that do not anticipate storing much data in the DFS. Data utility company accounting may charge for rewrite processes only for service and data transfer initiated by the client, and not for transfers caused by internal operation of the DFS (such as adjusting location of data storage as data farms come online and go offline).
  • Alternatively, a Primary Source Data Client may be billed on a flat fee schedule (e.g., a monthly fee), or under a hybrid billing model (e.g., a flat fee up to a predetermined limit, with a per-megabyte fee above the limit). Billing may also be divided into separate fees for transmission and for storage. The client will be allowed a certain amount of monthly data transfer (writes and rewrites) along with a lease of specific amounts of data storage. Different tiered memberships may be available to meet the needs of subscription-based clients. Tiers may be structured based on factors such as storage capacity used, bytes of data transfer used, speed of data access (latency or communication speed), the number of times backups are replicated, the regions to which replications are sent, and so forth. Client-initiated rewrites of stored data would only use the allotted amount of data transfer limits associated with the client's specific subscription tier. Exceeding these limits would result in charges at premium overage rates.
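  • The hybrid model reduces to simple arithmetic, as in the following Python sketch; the rates and tier values are invented for illustration.
```python
# Flat monthly fee covers transfer up to the tier's limit; overage is billed
# per megabyte at a premium rate.
def monthly_bill(transferred_mb: float, flat_fee: float,
                 included_mb: float, overage_per_mb: float) -> float:
    overage = max(0.0, transferred_mb - included_mb)
    return flat_fee + overage * overage_per_mb

# Tier with 500 GB included: 620 GB used => flat fee plus 120 GB of overage.
print(monthly_bill(620_000, flat_fee=49.00,
                   included_mb=500_000, overage_per_mb=0.002))  # 289.0
```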
  • A Primary Source Data Client may also be billed based on any number of factors or optional value-added services, such as the degree of replication or redundancy, regionalization (i.e., dispersal) of their data, strength of encryption, etc. For example, a Primary Source Data Client preferring a higher degree of data security may choose to have their data replicated on a greater number of data farms 104, with a concomitant greater billing.
  • Alternatively, a Primary Source Data Client may prefer to use the DFS only for emergency backup purposes, in exchange for appropriate billing that recognizes the infrequent but critical instances that the client would be retrieving stored data. Greater latency of data retrieval for such service may be tolerated, because of the infrequent nature of such access.
  • A Primary Source Data Client may continue to use other storage not associated with the DFS without charge, e.g., local storage, remote storage (e.g., LAN-based storage, other cloud-based storage not associated with the DFS of the present invention, etc.), local backups, and so forth.
  • In some embodiments in accordance with the present invention, the Data Utility Company may provide one or more temporary data storage units in server 106 in order to store incoming client data for storage and backups. The Data Utility may then replicate, encrypt, and transmit the data to multiple data farms 104.
  • Over time, additional individual data farms 104-n may be added to the DFS, or some data farms 104-n may become inactive due to aforementioned disruptions to the data farm 104-n and/or its communication link to WAN 101. In some embodiments in accordance with the present invention, when a new data farm 104-n is added to the DFS, its addition to the DFS may be recorded by the storage network module 110. Thereafter, data farm 104-n and storage network module 110 may exchange periodic ping, heartbeat or monitoring signals such that storage network module 110 is aware that data farm 104-n is operating normally. If storage network module 110 fails to receive an expected ping response, heartbeat message or monitoring message from data farm 104-n, storage network module 110 may infer that data farm 104-n is offline and modify internal routing tables such that new data to be stored is not assigned to data farm 104-n. The responsible data farmer may be notified of the status of their data farm 104-n and/or be penalized.
  • Data farm 104-n may be periodically pinged thereafter to determine if data farm 104-n has come back on-line. If so, and after waiting for an optional waiting period in case intermittent problems persist, the DFS may again utilize data farm 104-n as usual.
  • In some embodiments in accordance with the present invention, if a Primary Source Data Client has not accessed some of their data for more than a predetermined period of time, the Primary Source Data Client may request that their data be rewritten in the DFS. The rewritten data (or portions thereof) may be written to the same data farm(s) 104-n or to different data farm(s) 104-n. By this method, operation of the DFS may also help guard against data becoming inaccessible when stored in an offline data farm 104-n. If the data is rewritten to the same data farm 104-n, then new data is not being written to the data farm 104-n, potentially providing a cost savings to a user who is being billed based upon the amount of data transferred. The data farmer may continue to be paid for the transfer and storage of the information. If storage exceeds limits specified in a lease agreement, the data farmer may be compensated based on new lease agreements with the Primary Source Data Client.
  • Billing agreements between a Primary Source Data Client and a data farmer may further depend upon levels of service and the spending budget available to the Primary Source Data Client. For example, the level of service could be based on the regions in which the Client's information will be stored in duplicate: storm- or disaster-prone areas would fall under a lower-priced agreement with a higher risk factor, while locations with faster internet infrastructure and/or lower disaster rates would command a higher-priced agreement. Billing may also depend upon the type of data farm used, the latency of the storage used, and so forth. For example, usage of data farms housed in residences may be billed at a different rate than usage of data farms housed in commercial buildings. The sensitivity of the Client's information and the frequency of retrieval may also be used to determine the rate charged.
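    The following sketch illustrates how such factors might combine into a rate; the base rate and every multiplier are invented for illustration and are not part of the specification:

        BASE_RATE_PER_GB = 0.05                     # currency units per GB-month

        REGION_FACTOR = {"low_risk_fast_net": 1.4,  # premium: low disaster rate
                         "moderate": 1.0,
                         "disaster_prone": 0.7}     # discounted, higher risk

        FARM_TYPE_FACTOR = {"commercial": 1.2, "residential": 1.0}

        def monthly_charge(gb_stored, region, farm_type,
                           replicas=1, low_latency=False):
            # Combine region risk, farm type, replication degree, and
            # latency tier into a single monthly charge.
            rate = (BASE_RATE_PER_GB
                    * REGION_FACTOR[region]
                    * FARM_TYPE_FACTOR[farm_type]
                    * replicas)                     # more copies, more billing
            if low_latency:
                rate *= 1.25                        # faster retrieval tier
            return gb_stored * rate

        # Example: 500 GB, duplicated 3x in a premium commercial region.
        print(monthly_charge(500, "low_risk_fast_net", "commercial", replicas=3))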
  • Further embodiments of operating the DFS may include paying a bonus to a data farmer who stores, and is able to successfully retrieve, a backup requested by a Primary Source Data Client. Such a bonus tends to encourage participation and diligent maintenance by data farmers, e.g., tending to the electrical and data-communication needs (if any) and the security of their data farms.
  • Embodiments in accordance with the present invention may provide that the Data Utility will determine which of the potentially multiple data farmers qualified under the Primary Source Data Client's request (e.g., by geographic location, business/residential data farm, storage media latency, etc.) will provide the backup data. The Data Utility may consider one or more factors such as the present storage and communication capacity of qualified data farmers, which in turn may depend upon other factors such as current internet traffic in the region, locality and site bandwidth traffic, and so forth. The Data Utility may choose one or more data farmers based on a preferred combination of such present factors.
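    A minimal sketch of such a selection step, assuming the Data Utility holds a snapshot of each qualified farmer's present capacity and traffic; all names and weights are hypothetical:

        from dataclasses import dataclass

        @dataclass
        class FarmerStatus:
            # Snapshot of one qualified data farmer.
            farmer_id: str
            free_capacity_gb: float
            link_utilization: float      # 0.0 (idle) .. 1.0 (saturated)
            regional_congestion: float   # 0.0 .. 1.0, e.g., from traffic probes

        def choose_farmers(qualified, n=2):
            # Rank qualified farmers by a composite of present storage and
            # communication capacity; the weighting is invented for
            # illustration, not prescribed by the specification.
            def score(f):
                return (f.free_capacity_gb
                        * (1.0 - f.link_utilization)
                        * (1.0 - 0.5 * f.regional_congestion))
            return sorted(qualified, key=score, reverse=True)[:n]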
  • Embodiments in accordance with the present invention may provide a choice of different service levels to data farmers, so that each data farmer can select a level of service for providing data storage services. Differing levels of data storage services may affect decisions by data farmers regarding the types of storage assets to deploy and where on or within their property the storage assets are placed. For example, a data farmer may allocate 80% of their data storage capacity to a first type or location of storage asset in exchange for a guaranteed rate of return; another 15% to a second type or location in exchange for payment at a higher rate, but only for storage space actually utilized by a Primary Source Data Client; and the final 5% to a third type or location reserved for emergency backups and over-limit data usage by Primary Source Data Clients. Such emergency backup and/or over-limit usages are less likely to occur but generate proportionally more revenue when they do. This allocation may be changed periodically by the data farmer, subject to the capacity needs of the DFS and the Data Utility.
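    For illustration, the example allocation above can be expressed as a small table; the tier names are hypothetical:

        # One data farmer's allocation across service tiers, using the
        # example percentages above.
        ALLOCATION = {
            "guaranteed_return":  0.80,  # flat payment regardless of use
            "pay_per_use":        0.15,  # higher rate, only when utilized
            "emergency_overflow": 0.05,  # premium rate when invoked
        }

        def capacity_by_tier(total_gb):
            # The farmer may revise ALLOCATION periodically, subject to
            # the capacity needs of the DFS and the Data Utility.
            assert abs(sum(ALLOCATION.values()) - 1.0) < 1e-9
            return {tier: total_gb * share
                    for tier, share in ALLOCATION.items()}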
  • Embodiments of the present invention may provide disaster recovery services to user 102-m at computing node 152-m. For example, user 102-m may notify the administrative module 112 that some or all of their data backed up in the distributed file system needs to be restored. An exemplary cause is a failure of the primary storage elements associated with computing node 152-m. In this situation, a disaster recovery service in accordance with an embodiment of the present invention may include the ability to reassemble the data of user 102-m from the various portions of the client's data backed up within the DFS. The restored data may be supplied to the client, temporarily stored in another location (e.g., in a memory within server 106), or made accessible to the client in its distributed state for the purpose of distributed computing provided by cloud computing services.
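    A minimal sketch of such reassembly, assuming a manifest of content hashes was recorded at backup time; manifest and fetch are hypothetical names:

        import hashlib

        def reassemble(manifest, fetch):
            # Rebuild a client's data set from portions dispersed in the DFS.
            # manifest: ordered (block_id, sha256_hex) pairs recorded at
            # backup time; fetch(block_id) retrieves a block from any farm
            # holding a replica.
            parts = []
            for block_id, digest in manifest:
                blob = fetch(block_id)
                if hashlib.sha256(blob).hexdigest() != digest:
                    raise ValueError("block %s failed integrity check" % block_id)
                parts.append(blob)
            return b"".join(parts)   # restored data, ready for the client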
  • Embodiments of the present invention include a system having one or more processing units coupled to one or more memories. The one or more memories may be configured to store software that, when executed by the one or more processing units, allows practicing embodiments of the invention, at least by use of the processes described herein, including those illustrated in the Figures and related text.
  • The disclosed methods may be readily implemented in software, such as by using object or object-oriented software development environments that provide portable source code usable on a variety of computer or workstation platforms. Alternatively, the disclosed system may be implemented partially or fully in hardware, such as by using standard logic circuits or VLSI design. Whether software or hardware is used to implement systems in accordance with various embodiments of the present invention may depend on various considerations, such as the speed or efficiency requirements of the system, the particular function, and the particular software or hardware systems being utilized.
  • While the foregoing is directed to embodiments of the present invention, other and further embodiments of the present invention may be devised without departing from the basic scope thereof. It is understood that various embodiments described herein may be utilized in combination with any other embodiment described, without departing from the scope contained herein. Further, the foregoing description is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. Certain exemplary embodiments may be identified by use of an open-ended list that includes wording to indicate that the list items are representative of the embodiments and that the list is not intended to represent a closed list exclusive of further embodiments. Such wording may include “e.g.,” “etc.,” “such as,” “for example,” “and so forth,” “and the like,” etc., and other wording as will be apparent from the surrounding context.
  • No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Where only one item is intended, the term “one” or similar language is used. Further, the terms “any of” followed by a listing of a plurality of items and/or a plurality of categories of items, as used herein, are intended to include “any of,” “any combination of,” “any multiple of,” and/or “any combination of multiples of” the items and/or the categories of items, individually or in conjunction with other items and/or other categories of items.
  • Moreover, the claims should not be read as limited to the described order or elements unless stated to that effect. In addition, use of the term “means” in any claim is intended to invoke 35 U.S.C. §112, ¶6, and any claim without the word “means” is not so intended.

Claims (17)

What is claimed is:
1. A building support with a concealed electronic component for a structure, comprising:
a rigid support member;
a mounting attachment affixed to the rigid support member, the mounting attachment adapted to support the concealed electronic component; and
a transceiver coupled to the concealed electronic component, the transceiver adapted to support an external communication link.
2. The building support of claim 1, wherein the building support comprises a wall stud.
3. The building support of claim 1, wherein the building support comprises a floor joist.
4. The building support of claim 1, wherein the building support comprises a heating and cooling duct.
5. The building support of claim 1, wherein the building support comprises a concrete slab, the concrete slab comprising a void adapted to enclose the concealed electronic component.
6. The building support of claim 1, wherein the building support is positioned in an area above a false ceiling.
7. The building support of claim 1, wherein the building support is positioned in an area below a raised floorboard.
8. The building support of claim 1, wherein the building support comprises an item selected from a group consisting of a hollow core door, a rooftop, an outdoor structure, an element of a garage, and an element of a shed.
9. A backing material to support an electronic component concealed within a building structural element, wherein the building structural element comprises one or more rigid building support members, the backing material comprising:
a substrate;
a structure attachment along at least one surface of the substrate, the attachment adapted to attach the substrate to a rigid building support member; and
one or more electronic component attachments disposed on a major surface of the substrate.
10. The backing material of claim 9, wherein the substrate comprises a flexible substrate, and wherein the substrate, when draped between adjacent rigid building support members, is adapted to substantially maintain a predetermined relative physical configuration among a plurality of electronic components coupled to the one or more electronic component attachments.
11. The backing material of claim 9, wherein the substrate comprises a substantially rigid substrate.
12. The backing material of claim 9, wherein the rigid building support member comprises a wall stud.
13. The backing material of claim 9, wherein the substrate is adapted to be supported by one rigid building support member.
14. The backing material of claim 9, wherein the substrate may be coupled to a second substantially rigid substrate.
15. The backing material of claim 9, wherein the electronic component comprises a data storage device.
16. The backing material of claim 10, wherein the plurality of electronic components comprise a data farm.
17. The backing material of claim 16, wherein the plurality of electronic components further comprise a transceiver adapted to support an external communication link.
US14/242,905 2013-02-21 2014-04-02 Building support with concealed electronic component for a structure Pending US20140230342A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/242,905 US20140230342A1 (en) 2013-02-21 2014-04-02 Building support with concealed electronic component for a structure

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201313772853A 2013-02-21 2013-02-21
US14/242,905 US20140230342A1 (en) 2013-02-21 2014-04-02 Building support with concealed electronic component for a structure

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US201313772853A Continuation 2013-02-21 2013-02-21

Publications (1)

Publication Number Publication Date
US20140230342A1 true US20140230342A1 (en) 2014-08-21

Family

ID=51350094

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/242,905 Pending US20140230342A1 (en) 2013-02-21 2014-04-02 Building support with concealed electronic component for a structure

Country Status (7)

Country Link
US (1) US20140230342A1 (en)
EP (2) EP2946049B1 (en)
JP (2) JP6505024B2 (en)
CN (2) CN105121760B (en)
CA (1) CA2901294C (en)
DK (1) DK2946049T3 (en)
WO (1) WO2014130831A2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210383784A1 (en) * 2018-11-09 2021-12-09 3M Innovative Properties Company Blanking panels including acoustic absorbing materials

Citations (75)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4902853A (en) * 1987-08-03 1990-02-20 Societe d'Administration et de Realisations d'Investissements (SARI) Electrical or like energy transport installation and method of implementing same
US5172530A (en) * 1990-11-06 1992-12-22 Allsteel Inc. Sound attenuation panel arrangement with cabling accommodating capability for office furniture space divider systems
US5257011A (en) * 1991-12-03 1993-10-26 Avid Corporation Data altering means for multi-memory electronic identification tag
US5424859A (en) * 1992-09-24 1995-06-13 Nippon Telegraph And Telephone Corp. Transceiver for wireless in-building communication system
US5456433A (en) * 1994-01-05 1995-10-10 Burns; James M. Antenna roof mounting
US5465082A (en) * 1990-07-27 1995-11-07 Executone Information Systems, Inc. Apparatus for automating routine communication in a facility
US5619217A (en) * 1995-05-19 1997-04-08 Allen Telecom Group, Inc. Cellular and PCS antenna mounting assembly
US5708659A (en) * 1993-10-20 1998-01-13 Lsi Logic Corporation Method for hashing in a packet network switching system
US5778612A (en) * 1990-11-28 1998-07-14 Kissinger; Terrance G. Partition panel containing data processing or communications equipment
US6222503B1 (en) * 1997-01-10 2001-04-24 William Gietema System and method of integrating and concealing antennas, antenna subsystems and communications subsystems
US6375073B1 (en) * 1996-08-29 2002-04-23 Swisscom Morris Ag Method for crediting or recrediting a data card with a given amount
US6381596B1 (en) * 1997-03-06 2002-04-30 Rudolf Bayer Database system and method for operating a database system
US6388190B1 (en) * 1999-10-29 2002-05-14 Dekko Engineering, Inc. Modular power assembly with switch
US6399883B1 (en) * 2000-06-20 2002-06-04 Thomas E. Lhota Channel for low voltage electrical wiring
US6715246B1 (en) * 1999-08-10 2004-04-06 Armstrong World Industries, Inc. Ceiling tile transmitter and receiver system
US6883710B2 (en) * 2000-10-11 2005-04-26 Amerasia International Technology, Inc. Article tracking system and method
US20050086876A1 (en) * 2003-10-27 2005-04-28 Hufcor, Inc. Flexible space management system and method
US20050114464A1 (en) * 2003-10-27 2005-05-26 Shai Amir Virtualization switch and method for performing virtualization in the data-path
US20050178074A1 (en) * 2004-02-02 2005-08-18 Kerosetz Jay E. Multifunction table
US20050257456A1 (en) * 2004-05-07 2005-11-24 Biffis Lou G Homes and home construction
US20060029093A1 (en) * 2004-08-09 2006-02-09 Cedric Van Rossum Multimedia system over electronic network and method of use
US20060032527A1 (en) * 2004-07-27 2006-02-16 Spheral Solar Power Inc. Solar panel overlay and solar panel overlay assembly
US7039366B1 (en) * 2004-04-01 2006-05-02 Cetacea Sound, Inc. Antenna and access point mounting system and method
US7124173B2 (en) * 2001-04-30 2006-10-17 Moriarty Kathleen M Method and apparatus for intercepting performance metric packets for improved security and intrusion detection
US20070199262A1 (en) * 2004-06-03 2007-08-30 Hill-Rom Services, Inc. Patient Care Modules For Hospital Walls
US7373340B2 (en) * 2003-08-27 2008-05-13 Sap Aktiengesellschaft Computer implemented method and according computer program product for storing data sets in and retrieving data sets from a data storage system
US7482536B2 (en) * 2002-07-31 2009-01-27 Hewlett-Packard Development Company, L.P. Underfloor cable junction unit and computer center equipped with such junction units
US20090094911A1 (en) * 2007-10-10 2009-04-16 Whitaker Milan M Fly-by-box
US20090144220A1 (en) * 2007-11-30 2009-06-04 Yahoo! Inc. System for storing distributed hashtables
US20090234967A1 (en) * 2008-03-17 2009-09-17 Nokia Corporation Method, system, and apparatus for transferring P2P file distribution tasks between devices
US7642988B1 (en) * 2006-06-19 2010-01-05 Sprint Communications Company L.P. Multi-link antenna array configured for cellular site placement
US7640699B2 (en) * 2003-10-29 2010-01-05 Gilleran William J Air conditioning line flashing panel
US20100043309A1 (en) * 2008-08-20 2010-02-25 Strategic Solutions Unlimited, Inc. Expeditionary pod construction
US7738251B2 (en) * 2006-06-01 2010-06-15 Google Inc. Modular computing environments
US20100208590A1 (en) * 2009-02-13 2010-08-19 Alcatel-Lucent Peer-to-peer traffic management based on key presence in peer-to-peer control transfers
US20100236154A1 (en) * 2009-03-18 2010-09-23 Sdk, Llc Lighting and shade components for building exteriors
US20100306286A1 (en) * 2009-03-05 2010-12-02 Chi-Hsien Chiu Distributed stream processing
US7845128B2 (en) * 2008-09-23 2010-12-07 Architectural Glass And Aluminum Corporation, Inc. Unitized building integrated photovoltaic conversion module
US7858878B2 (en) * 2008-09-15 2010-12-28 Honeywell International Inc. Flush mount-corner mount gang box
US7945689B2 (en) * 2007-03-23 2011-05-17 Sony Corporation Method and apparatus for transferring files to clients using a peer-to-peer file transfer model and a client-server transfer model
US7948951B2 (en) * 2002-06-12 2011-05-24 Xocyst Transfer Ag L.L.C. Automatic peer discovery
US20110162291A1 (en) * 2009-11-05 2011-07-07 Robert Hilliard Wall panel for mounting miniature computing device
US20110308179A1 (en) * 2010-06-17 2011-12-22 Hepacart, Inc. Ceiling wiring access point device
US8090813B2 (en) * 2006-09-19 2012-01-03 Solid State Networks, Inc. Methods and apparatus for data transfer
US20120161938A1 (en) * 2009-08-05 2012-06-28 Dieter Kilian Receiving device with rfid detection of built-in components held therein, and rfid detection method
US8215066B2 (en) * 2010-11-29 2012-07-10 Qatar Football Association Multi-layer, revolving stadium roof
US8224862B2 (en) * 2007-01-12 2012-07-17 Geni, Inc. System and method for providing a networked viral family tree
US8271992B2 (en) * 2007-08-29 2012-09-18 Nirvanix, Inc. Load based file allocation among a plurality of storage devices
US8296398B2 (en) * 2008-04-29 2012-10-23 Overland Storage, Inc. Peer-to-peer redundant file server system and methods
US20120317893A1 (en) * 2010-12-17 2012-12-20 Murphy Thomas M Security Systems and Methods of Using Same
US8336261B2 (en) * 2010-11-29 2012-12-25 Qatar Football Association Revolving roof for an indoor/outdoor stadium
US8352430B1 (en) * 2009-11-06 2013-01-08 Carbonite, Inc. File storage system to support high data rates
US8371540B1 (en) * 2010-05-13 2013-02-12 Lewis B. Medlin, Jr. Cable positioning bracket
US8387318B2 (en) * 2009-08-06 2013-03-05 Watts LLC Open-air parking shelter with photovoltaic elements and improved airflow characteristics
US8387315B2 (en) * 2010-11-29 2013-03-05 Qatar Football Association Microclimate cooling system for an indoor/outdoor stadium
US20130275363A1 (en) * 2012-04-17 2013-10-17 Mingxi Wu Meta-data driven data ingestion using mapreduce framework
US20130304427A1 (en) * 2012-05-13 2013-11-14 Analogue Studio, Llc Unitized Cladding System, Method, and Software Program
US20130318887A1 (en) * 2010-12-17 2013-12-05 Thomas M. Murphy Wall Capping Systems and Methods of Using Same
US20130340361A1 (en) * 2009-06-03 2013-12-26 Bripco Bvba Data Centre
US8644844B2 (en) * 2007-12-20 2014-02-04 Corning Mobileaccess Ltd. Extending outdoor location based services and applications into enclosed areas
US20140062276A1 (en) * 2012-09-06 2014-03-06 Troy Alan Wyman Container assembly mountable to a door
US20140182518A1 (en) * 2012-12-27 2014-07-03 Thomas Boehm Systems, Devices, and/or Methods for Feeding Birds
US20140215257A1 (en) * 2013-01-30 2014-07-31 Ebay Inc Daisy chain distribution in data centers
US20140268601A1 (en) * 2012-06-01 2014-09-18 Landauer, Inc. Filter Packaging for System for Wireless, Motion and Position-Sensing, Integrating Radiation Sensor for Occupational and Environmental Dosimetry
US8839569B2 (en) * 2012-03-12 2014-09-23 Compass Datacenters, Llc Truly modular building datacenter facility
US8931221B2 (en) * 2012-11-21 2015-01-13 Google Inc. Alternative data center building designs
US8988222B2 (en) * 2007-01-18 2015-03-24 James Neil Rodgers Stolen bicycle (missing chattel) identification, tracking and location; a system and method
US9179316B2 (en) * 2009-01-28 2015-11-03 Headwater Partners I Llc Mobile device with user controls and policy agent to control application access to device location data
US9317823B2 (en) * 2000-01-24 2016-04-19 Nextreme, Llc Embedded RFID device for tracking a reusable material handling device
US9372870B1 (en) * 2003-01-21 2016-06-21 Peer Fusion, Inc. Peer to peer code generator and decoder for digital systems and cluster storage system
US9413630B2 (en) * 2010-10-04 2016-08-09 Avocent Huntsville Corp. Remote access appliance having MSS functionality
US9411976B2 (en) * 2006-12-01 2016-08-09 Maidsafe Foundation Communication system and method
US9787692B2 (en) * 2008-03-18 2017-10-10 Reduxio Systems Ltd. Network storage system for a download intensive environment
US20180020505A1 (en) * 2014-02-14 2018-01-18 ATOM, Inc. Systems and Methods For Customizing Hotel, Timeshare, and Rental Property Guest Experiences, and Conserving Resources and Utilities Using Internet Of Things Devices and Location Tracking Augmented With Contextual Awareness
US11658929B2 (en) * 2007-06-28 2023-05-23 Voxer Ip Llc Telecommunication and multimedia management method and apparatus

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030097806A1 (en) * 1996-03-05 2003-05-29 Brown John G. Inner accessible commutering enterprise structure interfaced with one or more workplace, vehicle or home commutering stations
JPH11200538A (en) * 1998-01-14 1999-07-27 Sekisui Chem Co Ltd Housing room for equipment instrument
JPH11346851A (en) * 1998-06-09 1999-12-21 Sekisui Chem Co Ltd Housing box and building unit
EP1114231A4 (en) * 1999-06-17 2008-06-04 Autowin Corp Remote controllable device for opening/closing of a window
JP3969557B2 (en) * 1999-10-19 2007-09-05 株式会社リコー Intelligent partition
US6839815B2 (en) * 2001-05-07 2005-01-04 Hitachi, Ltd. System and method for storage on demand service in a global SAN environment
JP4856822B2 (en) * 2001-06-14 2012-01-18 株式会社知的未来 Security system
JP4432712B2 (en) * 2004-10-05 2010-03-17 富士電機リテイルシステムズ株式会社 Wall panel structure
JP4734156B2 (en) * 2006-03-29 2011-07-27 ミサワホーム株式会社 Partition wall panel for mounting electrical equipment
US8171115B2 (en) * 2008-03-18 2012-05-01 Microsoft Corporation Resource equalization for inter- and intra- data center operations
US9614924B2 (en) * 2008-12-22 2017-04-04 Ctera Networks Ltd. Storage device and method thereof for integrating network attached storage with cloud storage services
US8077457B2 (en) * 2009-02-27 2011-12-13 Microsoft Corporation Modularization of data center functions
CN102071818A (en) * 2009-11-25 2011-05-25 中国移动通信集团广东有限公司深圳分公司 Anti-theft communication machine room with spliced concrete interlayer
CN102804945B (en) * 2010-03-22 2016-03-16 惠普发展公司,有限责任合伙企业 Modular data center
US8751907B2 (en) * 2010-09-14 2014-06-10 King Saud University Joint encoding and decoding methods for improving the error rate performance
KR101544483B1 (en) * 2011-04-13 2015-08-17 주식회사 케이티 Replication server apparatus and method for creating replica in distribution storage system

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150067155A1 (en) * 2013-08-29 2015-03-05 Tune, Inc. Systems and methods for measuring approximate engagement of users in a software application
US20180350226A1 (en) * 2014-01-03 2018-12-06 Intwine Connect, Llc Connected gateway
US10825336B2 (en) * 2014-01-03 2020-11-03 Intwine Connect, Llc Connected gateway
US11423766B2 (en) 2014-01-03 2022-08-23 Intwine Connect, Llc Connected gateway
US11749098B2 (en) 2014-01-03 2023-09-05 Intwine Connect, Llc Connected gateway
US10305164B1 (en) 2015-10-30 2019-05-28 Tessco Communications Incoporated Gang junction box antenna enclosure and antenna assembly
US11392544B2 (en) 2018-02-06 2022-07-19 Samsung Electronics Co., Ltd. System and method for leveraging key-value storage to efficiently store data and metadata in a distributed file system
US20220337550A1 (en) * 2021-04-19 2022-10-20 Applied Invention, Llc Physically secured network access control devices and systems

Also Published As

Publication number Publication date
DK2946049T3 (en) 2020-01-27
EP2946049B1 (en) 2019-11-06
CN109041502A (en) 2018-12-18
JP2016517552A (en) 2016-06-16
CN105121760B (en) 2018-09-25
EP2946049A2 (en) 2015-11-25
CA2901294A1 (en) 2014-08-28
JP6505024B2 (en) 2019-04-24
EP3627428A1 (en) 2020-03-25
JP6847145B2 (en) 2021-03-24
CN105121760A (en) 2015-12-02
JP2019117658A (en) 2019-07-18
WO2014130831A3 (en) 2015-03-26
EP2946049A4 (en) 2017-03-08
CA2901294C (en) 2020-06-30
WO2014130831A2 (en) 2014-08-28

Similar Documents

Publication Publication Date Title
JP6847145B2 (en) Building support with concealed electronic components for structures
US6854010B1 (en) Multi-location management system
CN102571916B (en) A kind of framework of renting software of cloud memory space and method of operation
CN103339611B (en) Remote access appliance having mss functionality
CN100450033C (en) Administration of access to computer resources on a network
CN104052789A (en) Load balancing for a virtual networking system
CN103748548A (en) Storage gateway activation process
CN103329063A (en) System and method for monitoring and managing data center resources in real time
CN103731462A (en) Remote storage system and method of network attached storage (NAS) device
CN101808139A (en) Data storage system in cloud environment
CN103685463A (en) Access control method and system in cloud computing system
US20120054712A1 (en) System and method for enabling creation and execution of localized applications in utility networks through device-specific programming
CN101977243A (en) Cloud data storage distribution (CDSD)
CN106056322A (en) Smart grid scheduling system based on cloud computing
CN106294842A (en) A kind of data interactive method, platform and distributed file system
CN102316122A (en) Method for managing intranet security based on cooperative mode
CN102088487B (en) Cloud computing storage server of high density disk
JP5420585B2 (en) Distributed data center consisting of container type or modular type data center
CN107613026A (en) Distributed file management system based on cloud storage system
CN115225664A (en) Construction method of meteorological information infrastructure resource cloud platform
Yan Cloud Storage Services
CN103929335A (en) Monitoring system of managing cloud host and monitoring method thereof
US10666724B1 (en) Geo-replicated IoT hub
CN102420856A (en) Data processing method and equipment
KR101431902B1 (en) Virtualized home network system and operating method for thereof

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCV Information on status: appeal procedure

Free format text: NOTICE OF APPEAL FILED

STCV Information on status: appeal procedure

Free format text: APPEAL BRIEF (OR SUPPLEMENTAL BRIEF) ENTERED AND FORWARDED TO EXAMINER

STCV Information on status: appeal procedure

Free format text: EXAMINER'S ANSWER TO APPEAL BRIEF MAILED

STCV Information on status: appeal procedure

Free format text: ON APPEAL -- AWAITING DECISION BY THE BOARD OF APPEALS

STCV Information on status: appeal procedure

Free format text: BOARD OF APPEALS DECISION RENDERED

STPP Information on status: patent application and granting procedure in general

Free format text: AMENDMENT / ARGUMENT AFTER BOARD OF APPEALS DECISION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCV Information on status: appeal procedure

Free format text: NOTICE OF APPEAL FILED

STCV Information on status: appeal procedure

Free format text: APPEAL BRIEF (OR SUPPLEMENTAL BRIEF) ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCV Information on status: appeal procedure

Free format text: NOTICE OF APPEAL FILED

STCV Information on status: appeal procedure

Free format text: APPEAL BRIEF (OR SUPPLEMENTAL BRIEF) ENTERED AND FORWARDED TO EXAMINER