In this article, we examine the concept of high availability from different perspectives.
High availability for any kind of software or service is ultimately determined by the experience and expectations of the end user. Downtime is costly in several ways: loss of information, decreased productivity, property damage, and more. The primary purpose of high availability is to reduce both downtime and its impact as much as possible. A sound strategy here helps balance business processes and SLAs. In the end, customers and stakeholders decide, on the basis of their expectations and the service agreement, whether a platform is highly available or not.
The availability of a given system is typically calculated as follows:

Availability (%) = (Uptime / (Uptime + Downtime)) × 100

The result is often conveyed by the number of '9's in the answer, which indicates the possible annual downtime or uptime. For example, 99.9% availability ("three nines") corresponds to roughly 8.76 hours of downtime per year.
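To make the formula and the "nines" convention concrete, here is a minimal sketch in Python; the function names and the sample figures are illustrative, not part of any standard:

```python
import math

def availability(uptime_hours, downtime_hours):
    """Fraction of time the system was available."""
    return uptime_hours / (uptime_hours + downtime_hours)

def annual_downtime_hours(avail):
    """Hours of downtime per year implied by an availability fraction."""
    return (1 - avail) * 365 * 24

def count_nines(avail):
    """Number of leading 9s in the availability figure, e.g. 0.999 -> 3."""
    return math.floor(-math.log10(1 - avail) + 1e-9)

# One year (8760 hours) with 8.76 hours of downtime:
a = availability(8751.24, 8.76)
print(f"{a:.3%}")                    # 99.900%
print(annual_downtime_hours(0.999))  # about 8.76 hours per year
print(count_nines(0.999))            # 3 ("three nines")
```

Note how quickly each extra nine tightens the budget: four nines leave well under an hour of downtime per year.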
High availability is one sub-concept under the broader concept of availability, which also includes partial and degraded availability. Sometimes a user must settle for something less than high availability that is still not a complete outage; this is partial availability. If things get worse, it becomes degraded availability, where the user is left with only limited functionality. Beyond these, there are other varying degrees of availability:
- Deferred Operations – During a disaster-recovery or maintenance window, you can still retrieve data, but background processes and new workflows may be slowed or halted.
- Data Latency – Under excess workload, hardware resources may be overloaded or the platform may fail, leaving the user with a slower, less productive experience.
- Impending Failures – The user may experience poor application response, for example in the form of failed application logic.
- Partial Failure – The user may face outages, horizontal or vertical, resulting in degraded performance or only partial success. The exact impact is determined by which components the outage has affected.
Under the concept of availability, you need to understand not only the types of availability but also the different types of downtime. Broadly, there are only two: planned and unplanned. As the name suggests, planned downtime is anticipated, so it is not as harmful or shocking as unplanned downtime. Outages are often the result of unplanned failures; besides being disruptive, they waste considerable time and can lead to data loss.
Whichever type of downtime occurs, the main aim during an outage is to bring the system back online with as little data loss as possible. Every minute of downtime carries a direct or indirect cost. For this reason, in the case of unplanned downtime, the organization must determine the cause of the outage, assess the current state of the system, and find a solution that prevents it from happening again.
SQL Crashes Can Occur Between Backup Intervals
In most small to medium-sized firms, backups are taken periodically, with intervals that can stretch to a week in some cases. All data written since the last backup is at risk if you encounter a SQL Server crash. To recover your data you need a sophisticated mdf recovery tool like DataNumen SQL Recovery. Powered by an extraction engine capable of handling large files and different media types, this class-leading tool can bring back your data, up to the last record stored on the system before the crash.
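The exposure described above is easy to quantify: the worst-case data loss window equals the time elapsed since the most recent successful backup. A minimal Python sketch, using hypothetical timestamps, illustrates why weekly backups leave a large window:

```python
from datetime import datetime

def data_at_risk_hours(last_backup: datetime, crash_time: datetime) -> float:
    """Hours of changes that exist only in the live database files,
    not in any backup, if a crash occurs at crash_time."""
    return (crash_time - last_backup).total_seconds() / 3600

# Weekly backups: a crash six days after the last backup puts
# almost a full week of changes at risk.
risk = data_at_risk_hours(datetime(2024, 1, 1), datetime(2024, 1, 7))
print(risk)  # 144.0 hours of unprotected changes
```

Shortening the backup interval, or adding transaction log backups between full backups, shrinks this window directly.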
Alan Chen is President & Chairman of DataNumen, Inc., the world leader in data recovery technologies, including access recovery and sql recovery software products. For more information visit www.datanumen.com.