Stanislav Dzurík
Many companies now own and manage tens to hundreds of IT systems, often literally kilometres away from each other. These systems are often very powerful, with huge capacities and powerful peripheral equipment designed for data backup or archiving. The servers communicate with each other over LAN and WAN networks.
But there are problems. Data stored in this way can only be read by identical platforms and their clients. As a result, data must be copied, upgraded, synchronized, or even migrated many times, and it must be stored separately at each platform or location, with a serious risk of degrading LAN and WAN communication in medium-sized and larger companies. There is also a risk of database inconsistencies and major difficulties in sharing data in a heterogeneous environment.
System administrators are under no less stress, having to manage this wide spectrum of different system platforms. With today's boom in databases, the time needed to run backups or to reconstruct data is growing unbearably. At the same time, local network throughput drops sharply during backups or reconstruction, and partial or total interruptions of production systems are required.
To make data resistant to hardware failures, it is placed on internal or external disc arrays, frequently based on Ultra Wide SCSI technology. Since SCSI technology is based on an arbitrated bus, the performance of such a disc array decreases as the number of connected devices rises. Another restriction of SCSI technology is the maximum length of the SCSI bus, which is 25 metres for differential SCSI. Problems are also encountered when disc capacity needs to be moved, during operation, from a server on one platform to a server on a different platform that requires more disc space at that moment. Because disc space is attached directly to servers, it cannot be allocated to other platforms, which makes management, consolidation and effective use very demanding. The result is increased investment in sufficient disc space and backup equipment.
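To make the degradation concrete, here is a minimal back-of-the-envelope sketch. The figures are illustrative assumptions, not from the article: a nominal 40 MB/s Ultra Wide SCSI bus shared by all attached devices, versus a switched fabric (as in a SAN) where each port keeps its own link bandwidth.

```python
# Illustrative comparison (not a benchmark) of per-device throughput:
# an arbitrated shared bus splits one fixed bandwidth pool among all
# devices, while a switched fabric gives every port its own link.

ULTRA_WIDE_SCSI_MBPS = 40.0   # assumed nominal shared-bus bandwidth
FABRIC_LINK_MBPS = 100.0      # assumed per-port fabric bandwidth

def shared_bus_per_device(devices: int) -> float:
    """Best case on a shared bus: total bandwidth divided by device count."""
    return ULTRA_WIDE_SCSI_MBPS / devices

def fabric_per_device(devices: int) -> float:
    """Switched fabric: each device retains its full link bandwidth."""
    return FABRIC_LINK_MBPS

for n in (1, 4, 8, 15):
    print(f"{n:2d} devices: shared bus {shared_bus_per_device(n):5.1f} MB/s"
          f" vs fabric {fabric_per_device(n):5.1f} MB/s per device")
```

With 15 devices (a full wide SCSI bus) the shared-bus best case falls below 3 MB/s per device, while each fabric port is unaffected by how many other devices are attached.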
All these factors complicate data migration, or upgrades to up-to-date versions, when staff already face a shortage of disc capacity and servers. The problem is that applications must be migrated and function-tested in a test environment while the production system keeps running, and the cutover from the old system to the new one must happen very quickly.
So how can these problems be solved? The answer is simple: implement new technologies in existing information systems, create a Storage Area Network (SAN), and install powerful management tools to operate and run the processes in such networks.
But building such a solution is not so simple and requires close cooperation between the supplier and the client. A SAN is a dedicated, high-speed data network infrastructure that gives all hosts mutual access to data. It enables consolidation of storage modules, data sharing and centralised management; it improves data backup, reconstruction, migration, availability and resistance to failures; and it increases the speed of data access. It also allows hardware to be upgraded without interrupting operation.
Even though this is a new technology, leading companies in the storage field already offer complete portfolios of the products needed to build a SAN.
Next time: How to build SAN, what technologies it uses and the contribution of SAN to IT.
Stanislav Dzurík is an IT consultant at Columbex International. Comments and questions can be sent to his email address at: dzurik@columbex.sk