#Exasol on #AWS: Elasticity with #Cloud UI

This is the second part of the mini-series Exasol on AWS. Here’s the first part.

Cloud UI is an extension to EXAoperation that makes it easy for you to

  • Scale up & down
  • Increase storage capacity
  • Scale out by adding nodes to the cluster

Cloud UI can be reached by adding the port number 8835 to the URL of your License Server (for example https://<License Server address>:8835) and uses the same credentials as EXAoperation.

Scale down to m5.large with Cloud UI

Depending on the load on your Exasol cluster, you can scale up your data nodes to more powerful EC2 instances when demand is high and scale down to less expensive EC2 instances when demand is low.

I started my little cluster with r5.large instances. Now I want to scale down to m5.large. Enter Cloud UI:

[Image: https://uhesse.files.wordpress.com/2019/12/ela01.png]

You see on the right side that scaling down to m5.large reduces both the available memory and the costs. I click on APPLY now and confirm the following pop-up with EXECUTE. The steps the system goes through can then be monitored in EXAoperation:

[Image: https://uhesse.files.wordpress.com/2019/12/ela02.png]

Notice that the database got restarted during that process.

Scale out by adding data nodes

I want to expand my present 1+0 cluster to a 2+1 cluster. First I add another active node:

[Image: https://uhesse.files.wordpress.com/2019/12/ela03.png]

As you can see, this not only increases the overall available memory but also the compute power. Adding a node usually increases storage capacity as well; not in this particular case, though, because I will also go from redundancy 1 to redundancy 2.

The log looks like this now:

[Image: https://uhesse.files.wordpress.com/2019/12/ela04.png]

My one-node cluster used redundancy 1; now I want to change that to redundancy 2. That step is of course not required if you started with a multi-node cluster that already uses redundancy 2. See here for more details about redundancy in Exasol.

To increase redundancy, I go to the EXAstorage page of EXAoperation:

[Image: https://uhesse.files.wordpress.com/2019/12/ela05.png]

The new EC2 instance for the new data node can be renamed like this:

[Image: https://uhesse.files.wordpress.com/2019/12/ela06.png]

That makes it easier to identify the nodes, for example when associating elastic IPs with them. I do that now for n12 in the same way I did it with n11 before.
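If you prefer to script the renaming and the elastic IP association instead of clicking through the AWS console, a minimal sketch with boto3 could look like this; the region, instance ID and allocation ID below are hypothetical placeholders.

import boto3

ec2 = boto3.client('ec2', region_name='eu-central-1')  # assumed region

# Give the new data node's EC2 instance a recognizable Name tag (hypothetical instance ID).
ec2.create_tags(
    Resources=['i-0123456789abcdef0'],
    Tags=[{'Key': 'Name', 'Value': 'n12'}],
)

# Associate an elastic IP with that instance (hypothetical allocation ID).
ec2.associate_address(
    InstanceId='i-0123456789abcdef0',
    AllocationId='eipalloc-0123456789abcdef0',
)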

The elastic IPs of the data nodes must then be entered into the connection details of clients such as DbVisualizer, used in this example:

[Image: https://uhesse.files.wordpress.com/2019/12/ela07.png]
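DbVisualizer is configured through its GUI, but the same principle applies to any driver-based client: list the elastic IPs of all data nodes in the connection string so the client can still reach the cluster if one node is unavailable. Here is a minimal sketch with the pyexasol Python driver; the IP addresses are hypothetical placeholders and 8563 is the default database port.

import pyexasol

# Hypothetical elastic IPs of data nodes n11 and n12, default Exasol port 8563.
conn = pyexasol.connect(
    dsn='3.120.10.11,3.120.10.12:8563',
    user='sys',
    password='********',
)

# Simple sanity check that the connection works.
print(conn.execute("SELECT 1").fetchval())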

A newly added active node is initially empty until a REORGANIZE operation redistributes data to it, for example a REORGANIZE DATABASE:

[Image: https://uhesse.files.wordpress.com/2019/12/ela08.png]
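If you would rather script that step than run it from a SQL client, the same statement can be submitted through a driver. A minimal sketch with pyexasol, reusing the hypothetical connection details from above:

import pyexasol

conn = pyexasol.connect(
    dsn='3.120.10.11,3.120.10.12:8563',  # hypothetical elastic IPs, default port
    user='sys',
    password='********',
)

# Redistribute existing rows across all active nodes, including the newly added one.
conn.execute("REORGANIZE DATABASE")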

I have a 2+0 cluster now: Mirrored segments on two active nodes but no reserve node.

Adding reserve nodes

To get a 2+1 cluster, I need to add a reserve node. Again, that’s quite easy to do with Cloud UI:

[Image: https://uhesse.files.wordpress.com/2019/12/ela09.png]

Within about 10 minutes, the log should show something like this:

[Image: https://uhesse.files.wordpress.com/2019/12/ela10.png]

Notice that there was no database restart this time. The new node should be renamed and get an elastic IP associated as shown before, and that IP also needs to be added to the client connection details. See here if you wonder what reserve nodes are good for.

Now that I have a 2+1 Exasol cluster running on AWS, I'm ready to demonstrate what happens if one node fails. That will be the next part of this series.