A Nimble Oracle RAC build

Our database consultants are often asked for advice on hardware, and sometimes they’re asked to architect and install it, too.

This was the case for a recent Oracle Real Application Cluster (RAC), which we built using Nimble flash storage.

It was for a well-known food manufacturer that needed a database that was resilient, highly available and reasonably high-performing. We spoke to Simon Lane, Senior Technical Consultant for Xynomix, about his experience of a Nimble build for an Oracle RAC database.

What was the initial scope of the project?

The project was to build two stretch Real Application Cluster (RAC) environments across two data centres, with storage hosted at each data centre. This would comprise two active-active databases and four WebLogic application servers in active-passive mode.

What do you mean by active-passive?

It’s about failover. Active-active means two servers run as companions on a primary and a secondary node, so when failover occurs, the secondary server simply takes over. Active-passive has one server on the primary node, which is transferred to the secondary node and restarted should a failover event occur.
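
To make the active-passive idea concrete, here is a minimal sketch of a failover monitor: it polls the active node and, after repeated failures, starts the service on the passive node. It is illustrative only; the host names, port and start command are hypothetical, and in the real build failover is handled by the clusterware and WebLogic, not by a script like this.

```python
import socket
import subprocess
import time

# Hypothetical endpoints, for illustration only.
PRIMARY = ("app-node1.example.com", 7001)
SECONDARY_START_CMD = ["ssh", "app-node2.example.com", "/opt/app/bin/start_server.sh"]

def primary_is_up(host, port, timeout=5):
    """Return True if a TCP connection to the active service succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def monitor(poll_interval=10, failures_before_failover=3):
    """Poll the active node; after repeated failures, start the passive copy."""
    failures = 0
    while True:
        if primary_is_up(*PRIMARY):
            failures = 0
        else:
            failures += 1
            if failures >= failures_before_failover:
                # Failover event: transfer the service by starting it on the secondary node.
                subprocess.run(SECONDARY_START_CMD, check=True)
                return
        time.sleep(poll_interval)

if __name__ == "__main__":
    monitor()
```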

As well as the Oracle stack, the system required infrastructure servers such as a DNS server and a time server. The requirement to run all of this on just two physical servers meant we would need a hypervisor, and Oracle VM was used as the virtualisation solution.

The whole stack uses Oracle Enterprise Linux running UEK on HP servers.
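
As a quick sanity check on that, UEK kernel release strings carry a 'uek' marker, so it is easy to confirm from any of the hosts which kernel is actually booted. The release string in the comment below is only an example, and `uname -r` does the same job from the shell; the Python sketch is purely illustrative.

```python
import platform

def running_uek():
    """Return True if the booted kernel release string looks like Oracle's UEK."""
    # UEK releases include a 'uek' marker in the version string, e.g. '...el7uek.x86_64'.
    return "uek" in platform.release().lower()

if __name__ == "__main__":
    print(f"Kernel: {platform.release()}  UEK: {running_uek()}")
```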

There are many possible storage configurations that offer high performance and availability. What made you decide to use Nimble hardware?

For storage, we chose Nimble based on our experience of its reliability and performance, but also for value: generally, the total cost of ownership is lower than that of similar alternatives, such as traditional RAID arrays.

Also, a major factor in our decision was their support package. Nimble’s support includes cloud-based predictive analytics and monitoring, automatic health and status alerts, and very fast, comprehensive support from Nimble themselves whenever an issue arises. This support includes direct tunnelling into the storage, which can be useful.

The solution uses a bonded 1Gbit iSCSI network with two VLANs on independent switches, making it resilient against switch failure.
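
As a rough illustration of how that bond can be verified on the hosts, the sketch below parses the Linux bonding driver's status file and reports the MII status of each slave interface; if either leg of the bond is down, it exits non-zero so it can feed a monitoring check. The bond name bond0 is an assumption, but /proc/net/bonding/<bond> is where the standard Linux bonding driver exposes this information.

```python
import sys

def bond_slave_status(bond="bond0"):
    """Parse /proc/net/bonding/<bond> into {slave_interface: mii_status}."""
    status = {}
    current_slave = None
    with open(f"/proc/net/bonding/{bond}") as f:
        for line in f:
            line = line.strip()
            if line.startswith("Slave Interface:"):
                current_slave = line.split(":", 1)[1].strip()
            elif line.startswith("MII Status:") and current_slave:
                status[current_slave] = line.split(":", 1)[1].strip()
                current_slave = None
    return status

if __name__ == "__main__":
    bond = sys.argv[1] if len(sys.argv) > 1 else "bond0"
    slaves = bond_slave_status(bond)
    for iface, mii in slaves.items():
        print(f"{iface}: {mii}")
    # Non-zero exit if any slave link is down (or no slaves were found).
    sys.exit(0 if slaves and all(m == "up" for m in slaves.values()) else 1)
```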

Is this the first time you’ve used Nimble storage?

It’s the first time that I’ve personally used it to create a hardware solution, but there are other consultants with Nimble experience at Xynomix.

What is the hardware that you used?

A Nimble hybrid CS200 with 8TB of magnetic disk and 160GB of SSD, plus 2 x HP DL380 servers with 128GB of RAM.

Did you learn any lessons? Is there anything you’d do differently on the next project?

Since network design, switch configuration and build are such a large part of a solution like this, we would use ‘pre-made’ switch configurations in future. It’s also important that for each major hardware or software component there is comprehensive support and monitoring available. It’s much easier to deploy these solutions than it is to monitor, maintain and support them over several years.

With Nimble we found the ideal combination of top-class hardware and excellent support, monitoring and maintenance.