The following white paper describes a new approach to PostgreSQL database isolation, one that directly affects complete backup and restore of databases, total database isolation, security, and disaster recovery capabilities. This document presents the concept behind the approach for decision makers rather than explaining the technical implementation for IT professionals, although some technical issues are mentioned along the way.
For the last five years, I have heard many ideas and thoughts about isolating a database as a single isolated entity, one that includes not only the data but also the users and permissions, schemas, views/procedures, and even its own storage space. So why should an organization isolate its entire database? Well… the answer is fairly simple: better control of IT assets. When each database is isolated, you can apply infrastructural changes to a single database only, and not to the entire server. Let's imagine a situation in which a company holds twenty databases on a single server. In most cases the storage (we will discuss the other IT assets later on) is shared, meaning that all of the databases are stored on the same spindle device. When an error occurs on that spindle device, it might take some time to repair the malfunction or even replace disks, and during that time the availability of ALL the databases is compromised together. The general concept strives to isolate the storage for each database, though that is not always possible. Imagine that you could give each database its own spindle device, and one of them fails… what then happens to the other databases? Are they still active and available? The answer is: YES!

Just as the phrase "time is money" means exactly that, this concept says that "data is money": whenever your database is down, you lose money. Imagine that the database behind your CRM application is down for two hours… what would be the loss of sales for that time? How would it affect your future sales in favor of your competitors? When we say "time is money", we actually mean that "UP-TIME is money". The database isolation approach is born from that understanding and strives for up-time and database availability.
The first step of database isolation begins at the storage level. Although it is well known that not every database can have its own spindle device, especially when each IT department implements different RAID levels to secure the storage as much as possible, the database isolation approach does recommend splitting the databases across several spindle devices. Another technique is to separate each database into an isolated storage unit. The most common isolated storage units are volumes, which allow the creation of several isolated spaces within the same spindle device. Each volume must have its own snapshot capability, which enables the IT administrator to snapshot the specific volume live. Each snapshot is then considered a full backup of a database. It is possible to produce incremental backups to save some space (usually done with rsync), but in order to restore the database completely you would need all of the parts. In today's world, where hardware is not as expensive as it used to be, you are better off paying for more spindles and gaining better control over your backups and what they store. Best practices recommend keeping seven days of history, so you can now do the math on how much storage you need. Let's take a test case in which you have ten databases of 500GB each, which is a common case today. You would then need 500GB for each day, which sums up to 3.5TB for one week of backups per database (35TB for all ten). I'm sure that by now you realize how affordable the storage you will need has become, even if you implement RAID-1 mirroring, which doubles the requirement.
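To make this concrete, here is a minimal sketch of the volume-per-database idea, assuming OpenZFS datasets; the pool name, dataset names, and quota values are hypothetical examples, not part of the original design:

    # Create one isolated dataset (volume) per database; names are examples.
    zfs create -o quota=500G tank/pg/crm
    zfs create -o quota=500G tank/pg/billing

    # Take a live snapshot of a single database's volume - the "full backup".
    # With all of a database's files on one dataset, the snapshot is
    # crash-consistent for PostgreSQL.
    zfs snapshot tank/pg/crm@$(date +%Y-%m-%d)

    # Keep seven days of history: destroy snapshots older than a week.
    zfs list -H -t snapshot -o name -s creation -d 1 tank/pg/crm \
      | head -n -7 \
      | xargs -r -n1 zfs destroy

Because each database lives on its own dataset, the retention math from the paragraph above applies per dataset, and a failure or restore on one volume never touches its neighbors.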
This is where the "fun stuff" comes in. Adjusting the RAM allocated to each database provides highly flexible performance configuration and accompanies database isolation. Traditional database systems do not limit RAM usage per database. Some database systems may limit the total use of RAM at the server level… but not at the database level. Limiting RAM usage per database is useful, especially when you do not know which database is the most expensive one (RAM-wise).
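One way this plays out in practice: once each database runs its own isolated PostgreSQL instance, that instance's memory parameters can be tuned independently. A minimal sketch, assuming shell access to the instance; the parameter values and data directory path are illustrative assumptions:

    # Tune memory for one database's own PostgreSQL instance (values are examples).
    # ALTER SYSTEM writes to postgresql.auto.conf; shared_buffers needs a restart.
    psql -U postgres -c "ALTER SYSTEM SET shared_buffers = '1GB';"
    psql -U postgres -c "ALTER SYSTEM SET work_mem = '32MB';"
    pg_ctl restart -D /var/lib/postgresql/data   # path is an assumption

The neighboring databases keep their own settings, so a RAM-hungry database can be given more memory (or throttled) without touching anyone else.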
Technically speaking, you can achieve all of the desired capabilities by using LXC (Linux Containers). Although not all database systems run on Linux, a Linux container provides all of the functionality you are looking for in order to isolate your databases. Some of you IT guys may say "Hey, we use Unix jails!", which is also fine. Without getting into the bits and bytes, Unix jails and LXC are basically alike. When you use a Unix system (such as FreeBSD, for example), you get an added bonus at the storage management level: OpenZFS, which is built into FreeBSD. Generally speaking, when you create a Linux container you are able to define two main features: (1) the disk size (total for this container) and (2) the maximum RAM usage. But, and that's a big one, the REAL prize here is the ability to separate each database and to snapshot the whole container. The snapshot you take contains all of the database configuration, both of the database system and of the operating system. Thus, restoring from a snapshot does not drag your IT professionals into system administration tasks. Taking live snapshots periodically serves as the full backups we discussed earlier.
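As a rough illustration of those two knobs plus whole-container snapshots, here is a sketch assuming the LXD management tooling on top of LXC; container names, the image alias, and the limits are hypothetical, and the disk quota requires a storage pool that supports it:

    # Create one container per database and cap its resources (names are examples).
    lxc launch images:debian/12 pgdb-crm
    lxc config set pgdb-crm limits.memory 2GiB
    lxc config device override pgdb-crm root size=500GiB

    # Snapshot the whole container: OS config, PostgreSQL config and data together.
    lxc snapshot pgdb-crm nightly
    lxc restore pgdb-crm nightly   # roll back with no extra sysadmin work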
One of the best features of OpenZFS, in my humble opinion, is the ability to make a live replica of the storage volume. That means I can replicate the entire volume while the database is still alive and running. Given that OpenZFS can also create an incremental snapshot and send it to another volume, I can create as many masters/slaves of the database as I wish (we will talk about the networking layer in a minute) with a relatively simple bash script. No need for special 3rd-party tools!
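The "relatively simple bash script" boils down to ZFS send/receive. A minimal sketch, assuming the dataset layout from the earlier example and a reachable host named replica (both assumptions):

    # Seed the replica with a full stream, then ship incremental diffs.
    zfs snapshot tank/pg/crm@base
    zfs send tank/pg/crm@base | ssh replica zfs receive -F tank/pg/crm

    # Later, on a schedule: send only the changes since the previous snapshot.
    zfs snapshot tank/pg/crm@incr1
    zfs send -i tank/pg/crm@base tank/pg/crm@incr1 \
      | ssh replica zfs receive tank/pg/crm

Run the incremental step from cron against as many targets as you like, and each target becomes another copy of the database volume.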
After reading about the database and storage issues, one might ask: "Hey… what about the networking and load balancing of the master/master cluster?" Well, the question is right on target! The short answer (without getting into the bits and bytes here) is yes, you will have to create a networking layer, but most of the work is already done for you if you are willing to use pgBouncer, which handles different and remote database sources and exposes their databases on a single machine as if they were all installed there. This gives you the ability to treat the pgBouncer machine as if it were any virtual IP on your network.
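A minimal sketch of that front-end idea, assuming a stock pgBouncer installation; the host addresses, database names, and auth file path are examples:

    # Map databases living in different containers to one front-end machine.
    cat > /etc/pgbouncer/pgbouncer.ini <<'EOF'
    [databases]
    crm     = host=10.0.3.11 port=5432 dbname=crm
    billing = host=10.0.3.12 port=5432 dbname=billing

    [pgbouncer]
    listen_addr = 0.0.0.0
    listen_port = 6432
    auth_type   = md5
    auth_file   = /etc/pgbouncer/userlist.txt
    EOF

    # Clients connect to the pgBouncer host as if every database lived there.
    psql -h pgbouncer.example.local -p 6432 -U app crm

Applications only ever see the pgBouncer address, so databases can be moved between containers or hosts by editing one configuration file.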
As with any IT department, monitoring is one of the first requirements. Isolating the databases into separate containers allows the IT department to monitor a single database the same way they would monitor a single machine. This eases the monitoring adjustments IT departments need to make in order to implement and use isolated PostgreSQL databases.
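In the LXD-based sketch above, for example, each container already reports its own figures, so existing per-machine checks carry over largely unchanged (container name again hypothetical):

    # Each container reports its own resource usage, like a small machine.
    lxc info pgdb-crm        # per-database CPU, memory and disk figures
    lxc exec pgdb-crm -- psql -U postgres \
      -c "SELECT count(*) FROM pg_stat_activity;"   # per-database connections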
This article introduced database isolation as a pre-emptive measure for business continuity. The implementation process is not always simple and requires considerable IT knowledge. Planning the implementation is one of the most important tasks in database isolation: it specifies not only your current needs but also how you will grow, IT-wise. That is why this task must be done by professionals. The implementation plan covers not only the database requirements but also the management requirements, from the overall project budget to the end-user web UI that gives the user control of the server holding the isolated databases. Please contact us for more information on how you can implement database isolation.