The amount of data that organisations must store is increasing rapidly, and for some organisations the cost of doing so is spiralling out of control. Managing these costs is essential for any business that wishes to remain competitive, or even to stay in business at all. It is not only how much data is stored that has cost implications, but also how it is stored.
In computer science there are essentially four levels in the storage hierarchy: primary storage, secondary storage, tertiary storage and off-line storage. The higher a level sits in the hierarchy, the faster its data can be accessed, that is, the lower the latency and the greater the bandwidth. It is also the most expensive, with cost generally measured in terms of cost per bit.
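The trade-off between the four levels can be sketched with some order-of-magnitude latency figures. The numbers below are illustrative assumptions only; exact values vary widely by hardware.

```python
# Illustrative (order-of-magnitude) access latencies for the four
# storage levels; these figures are assumptions, not measurements.
TIER_LATENCY_SECONDS = {
    "primary (RAM)": 100e-9,          # ~100 nanoseconds
    "secondary (hard drive)": 10e-3,  # ~10 milliseconds
    "tertiary (tape library)": 10.0,  # seconds: load, seek, read
    "off-line (manual load)": 300.0,  # minutes: a person fetches the media
}

BASELINE = TIER_LATENCY_SECONDS["primary (RAM)"]

for tier, latency in TIER_LATENCY_SECONDS.items():
    # Express each tier as a multiple of primary-storage latency
    print(f"{tier}: {latency / BASELINE:,.0f}x slower than primary")
```

Even with rough figures like these, the "many orders of magnitude" gap between the levels is clear, which is why cost per bit falls so steeply as you move down the hierarchy.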
The term ‘primary storage’ is used in the field of computer science to mean data that is directly accessible by the central processing unit (CPU). It is often called simply memory or main storage. While there are many different primary storage devices, essentially it refers to data that is stored in the form of an electrical charge in a microelectronic device such as a silicon chip.
In IT, however, 'primary storage' also refers to data that is actively used, not just data directly accessible by a CPU. In computer-science terms, much of this is secondary storage: data that is not directly accessible to the CPU, generally held on hard drives. Its latency (the time it takes to access it) is many orders of magnitude higher than that of primary storage.
Tertiary storage uses devices such as magnetic tapes and optical disks to store large volumes of data. When the information is required, the computer consults a look-up table to find where it is located, loads the appropriate device, reads the required information, and then unloads the device. Typical latencies of tertiary storage are several seconds.
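The retrieval steps above, consult a look-up table, load the device, read, unload, can be sketched roughly as follows. All names and the load delay are hypothetical, for illustration only.

```python
import time

# Hypothetical look-up table: object name -> (tape id, byte offset)
LOOKUP_TABLE = {
    "invoices-2019.tar": ("TAPE-0042", 1_048_576),
}

def read_from_tertiary(name, load_delay=2.0):
    """Sketch of a tertiary-storage read: look up, load, read, unload."""
    tape_id, offset = LOOKUP_TABLE[name]   # 1. find where the data lives
    time.sleep(load_delay)                 # 2. robot loads the tape (seconds)
    data = f"<contents of {name} at {tape_id}:{offset}>"  # 3. read
    # 4. unload the device so the drive is free for the next request
    return data
```

The mechanical load/unload step is what dominates the multi-second latency; the read itself is comparatively fast once the media is in the drive.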
Off-line storage refers to data stored on devices that cannot be accessed directly by a CPU. These devices, for instance CD-ROMs and magnetic tapes, must be loaded manually before they can be accessed. Off-line storage can be transported physically, and can be used to back up data at a remote location to enable disaster recovery should the original data be destroyed, say by a physical catastrophe such as a fire or terrorist attack.
While the computer-science and IT uses of these terms blur into one another, the general rule holds: the higher up the hierarchy, the more expensive the storage.
One of the burgeoning problems is that far too much corporate data is stored on primary storage devices such as central servers, laptops and PCs. Simply moving data that is not actively required to secondary storage facilities and devices can save a considerable amount of money. A simple and effective solution is cloud archiving, for instance using a cloud-based storage and data archive facility such as the model developed by Mimecast and described on their website mimecast.com.
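A first step toward moving inactive data off primary storage is simply identifying it. The sketch below finds files that have not been accessed for a given number of days; the directory, threshold, and reliance on the filesystem's last-access time are all assumptions for illustration.

```python
import os
import time

def archive_candidates(root, max_idle_days=180):
    """Yield paths under `root` not accessed for `max_idle_days` days.

    Uses the file's last-access time (st_atime); note that some
    filesystems are mounted with access-time updates disabled.
    """
    cutoff = time.time() - max_idle_days * 86_400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_atime < cutoff:
                yield path
```

A script like this could feed a nightly job that moves the candidates to a cheaper secondary tier or a cloud archive, leaving primary storage for data that is actually in active use.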