Oakies Blog Aggregator

Oracle multi-tenant and library cache isolation

This post is the result of a question I got after presenting a session about Oracle database mutexes, organised by ITOUG in response to the conference cancellations caused by COVID-19. Thanks to Gianni Ceresa for asking the question!

The library cache provides shared cursors and execution plans. Because these are shared, sessions can reuse the work previous sessions did to create them. However, precisely because they are shared, access needs to be regulated so that sessions do not overwrite each other’s work. This is done with mutexes.
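
As an aside, if you want a quick impression of where mutex contention occurs on a running system, the v$mutex_sleep view records which mutex types and code locations have had to sleep. A minimal sketch (the ordering and column selection are my choice):

-- which mutexes have caused the most sleeps since instance startup
select mutex_type, location, sleeps
from   v$mutex_sleep
order  by sleeps desc;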

The question I got was (paraphrased from memory): ‘when using pluggable databases, could a session in one pluggable database influence the performance of a session in another pluggable database?’

The answer I gave was that I hadn’t tested a multi-tenant scenario, but because the library cache is shared between the pluggable databases, it should be possible for a session in one pluggable database to block a session in another pluggable database.

So let’s test it!

I used an Oracle version 20.2 database, which automatically gives you a pluggable database. In fact, even if you don’t specify that you want a multi-tenant database, it will create one. This is as expected and documented.

I created an instance called o202, and two pluggable databases, PDB1 and PDB2. Yes, I am a creative genius.

I logged in to PDB1 and executed a ‘select 8 from dual’.

SQL> show con_id

CON_ID
------------------------------
3

SQL> select 8 from dual;

         8
----------
         8

Now, using another session in the root container, I dumped the library cache at level 16 to see what that looks like for the ‘select 8 from dual’ cursor:

SQL> show con_id

CON_ID
------------------------------
1

SQL> alter session set events 'immediate trace name library_cache level 16';

Session altered.

(don’t do this on a live environment!)
These are snippets of certain parts of the dump:

Bucket: #=100089 Mutex=0x71f7f8d8(627065225216, 6, 0, 6)
  LibraryHandle:  Address=0x67bcca90 Hash=372d86f9 LockMode=N PinMode=0 LoadLockMode=0 Status=VALD
    ObjectName:  Name=select 8 from dual
      FullHashValue=f18bf11763dd069341c61ce6372d86f9 Namespace=SQL AREA(00) Type=CURSOR(00) ContainerId=1 ContainerUid=1 Identifier=925730553 OwnerIdn=9
...
    Concurrency:  DependencyMutex=0x67bccb40(0, 1, 0, 0) Mutex=0x67bccbe0(146, 22, 0, 6)
...
        Child:  childNum='0'
          LibraryHandle:  Address=0x6e6b3fd8 Hash=0 LockMode=N PinMode=0 LoadLockMode=0 Status=VALD
            Name:  Namespace=SQL AREA(00) Type=CURSOR(00) ContainerId=3
...
            Concurrency:  DependencyMutex=0x6e6b4088(0, 0, 0, 0) Mutex=0x67bccbe0(146, 22, 0, 6)

This is bucket 100089, with its mutex. The counter for this mutex is the second argument, which is 6.
The library cache handle on the next line is the parent handle. The sql text and the hash value sit with the parent handle.
I am not sure how to interpret the “ContainerId” here: it says 1 (indicating the root container), while this SQL was only executed in container 3, as shown above.
After the first ‘…’, the two mutexes in the parent handle can be seen: the dependency mutex, and the parent handle mutex.
After the second ‘…’, child number 0 is visible. Here the container is specified from which this child was actually executed.
After the third ‘…’, the concurrency information for the child is shown. The important bit is that the dependency mutex is unique to the child, whilst the regular mutex is the same as (shared with) the parent’s.

Now what happens if I execute the exact same SQL (select 8 from dual) in another container, PDB2, alias container id 4?

SQL> show con_id

CON_ID
------------------------------
4

SQL> select 8 from dual;

         8
----------
         8

And look at bucket 100089 again (using the dump of the library cache at level 16):

Bucket: #=100089 Mutex=0x73f81e58(8589934592, 4, 0, 6)
  LibraryHandle:  Address=0x656e2638 Hash=372d86f9 LockMode=N PinMode=0 LoadLockMode=0 Status=VALD
    ObjectName:  Name=select 8 from dual
      FullHashValue=f18bf11763dd069341c61ce6372d86f9 Namespace=SQL AREA(00) Type=CURSOR(00) ContainerId=1 ContainerUid=1 Identifier=925730553 OwnerIdn=9
...
    Concurrency:  DependencyMutex=0x656e26e8(0, 2, 0, 0) Mutex=0x656e2788(2, 31, 0, 6)
...
        Child:  childNum='0'
          LibraryHandle:  Address=0x656e0ed8 Hash=0 LockMode=N PinMode=0 LoadLockMode=0 Status=VALD
            Name:  Namespace=SQL AREA(00) Type=CURSOR(00) ContainerId=3
...
            Concurrency:  DependencyMutex=0x656e0f88(0, 0, 0, 0) Mutex=0x656e2788(2, 31, 0, 6)
...
        Child:  childNum='1'
          LibraryHandle:  Address=0x655fba58 Hash=0 LockMode=N PinMode=0 LoadLockMode=0 Status=VALD
            Name:  Namespace=SQL AREA(00) Type=CURSOR(00) ContainerId=4
...
            Concurrency:  DependencyMutex=0x655fbb08(0, 0, 0, 0) Mutex=0x656e2788(2, 31, 0, 6)

Executing an identical SQL statement (which is a cursor object in the library cache) in another PDB means the same library cache hash bucket is used, and the container id is one of the sharing criteria for a child. Because the statement has a different container id in each case, the cursors cannot be shared. This is very logical if you think about it: both containers essentially are logically isolated databases, so there is no way to tell whether the tables, access rights, data, really anything, are the same, and therefore the SQL is parsed again for the other container.
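
This is also easy to verify from the dynamic performance views. A minimal sketch, assuming you query from the root container so that v$sql returns rows for all containers: the same statement should show up with one child cursor per container id.

-- run from CDB$ROOT; expect one child per container for the same statement
select con_id, sql_id, child_number, executions
from   v$sql
where  sql_text = 'select 8 from dual'
order  by con_id, child_number;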

This has some implications. Because the SQL is identical, it hashes to the same value, which means both containers need access to the same bucket to find the pointer to the parent handle. Both containers also need access to the same parent handle and the same heap 0 to obtain the child list and find their child. Both the bucket and the handle have a single mutex that serializes access.

Access to the parent heap 0 is done in shared mode, and remains shared if a compatible child can be found.

However, as soon as a new child needs to be created for the same SQL, the session creates the new child handle pinned in exclusive mode and inserts it into the hash table/child list in the parent’s heap 0. The new child is pinned in exclusive mode because the child heaps still need to be allocated and the child information created. This new child is the newest entry, and is therefore the first one any session finds when it scans for a compatible child.

At this point multi-tenancy makes a difference:
– If a session in the same container starts parsing and follows the procedure of hashing the SQL text, obtaining the bucket, finding the parent handle and then scanning the child list for a compatible child, it will find the child pinned in exclusive mode for this container, and wait for the child creation to finish on the event ‘cursor: pin S wait on X’.
– If a session in ANOTHER container starts parsing and follows the same procedure, it will also find the child pinned in exclusive mode, but because that child has a different container id, it will skip it and continue scanning the child list for compatible children. Scanning means pinning each child that could potentially be compatible, which means it has the same container id, in shared mode; by doing that the session either finds a compatible child or, if it doesn’t, creates one itself.

So what does that mean? The conclusion:
The library cache hash table, the parent cursor and the child list/hash table are shared between all containers. These require unique (library cache hash table/handle) or shared (child list/hash table) access by each database process that uses them, but the actual time they are held in these modes is so short that it is negligible.
The main reason for waiting for a cursor/library cache entry is a child pinned in exclusive mode during creation (‘cursor: pin S wait on X’).
When an identical statement is executed in another container, the session will skip a child that was created, or is in the process of being created, in another container, even when it is pinned in exclusive mode.
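
If you want to see whether this wait is happening right now, here is a minimal sketch against v$session (standard columns; for this event p1 is the mutex identifier, which is the hash value of the SQL):

-- sessions currently waiting on a child cursor being created elsewhere
select sid, con_id, sql_id, p1 idn, seconds_in_wait
from   v$session
where  event = 'cursor: pin S wait on X';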

This means that, in my opinion, the multi-tenant option does not significantly increase the risk of waiting because of library cache concurrency, specifically because a session can skip child cursors pinned exclusively in another container.

Update April 4: I tested having a child from one container pinned in X mode while running the same SQL in another container on 12.1.0.2.0 (the base release of 12.1.0.2), and it works exactly as described above. So whilst the library cache hash table and parent handles are shared, the cursor children are specific to a container and do not lock each other out.

Friday Philosophy – Concentrating and Keeping Calm.

I was talking with a friend this week (via a webcam of course) about how he had been looking & looking at some misbehaving code for days. His team mates had looked too. It was not working and logically it should work. None of them could work it out. The problem turned out to be a small but obvious mistake.

This of course happens to us all occasionally, but we both agreed that, at the moment, we have the attention spans of a goldfish and are as easily distracted as a dog in squirrel country. I asked around a few other friends and it seems pretty much universal. All of us are making cups of tea and then taking the milk into the lounge & putting the cup of tea in the fridge. Or walking into the kitchen and asking who got the bread out to make lunch. It was you. The cat is wondering why I open the pouch of cat food and then leave it on the worktop and go do my email for 20 minutes. She’s getting annoyed.

Why are we all failing to function? Because we are all worried. This is one of the things anxiety does to us.

The whole COVID-19 thing is stressful – the feeling of being trapped inside, concern for friends and family, the ever growing numbers of infected & dying. I actually think if you are not at all worried then you are either:

  • Not understanding the situation
  • In denial
  • A total sociopath
  • Someone who should not be allowed out alone
  • Someone who has reached a level of Zen calm usually only attainable by old oriental masters/mistresses

I’m by my nature often in camp 3 above, but even I am worried about this and I know it is making me tetchy and less able to focus. I’m struggling to keep my mind on things. Except on COVID-19. I tend to handle things I find unnerving by studying them and I probably spend about 3 or 4 hours a day looking at the latest information and scientific output on COVID-19. However, I note more things to “look at later” than I actually look at, as I am trying to manage my stress.

After an hour I make myself get up, go trim some roses, play a computer game, read a book. Anything to distract me. I’ve even started talking to the other person in the house and my wife is finding that particularly annoying. Sue seems easily annoyed and quite distracted at the moment. I wonder why?

Another way I cope is I talk with people about topics that are causing me stress. If I can’t talk, I write. Thus I wrote this Friday Philosophy – think of yourself as my counsellor.

I’ve seen a lot of social media “memes” about how long ago the 1st of March feels like, when we first started worrying about this. It seems like months ago, yes? To me it seems like a year. I started worrying about this a good while before the 1st March. I think the worry started about early/mid-February. Why? Because I’m a genius of course. {Note, this is called British self-deprecating sarcasm – I’m not a genius!}. No, the reason I picked up on all of this early was that chance primed me to.

I have a background in biology and some of the job roles I have held over my career have been in healthcare and the biological sciences. One role last year was working with a small biotech company working on immunology. So I take an interest in this sort of thing, it’s “my bag”. I was also pretty ill in December with Influenza (and yes, it WAS influenza, type A – I am not “the first case of COVID-19 in the UK”). So I was convalescing at home and took a specific interest in a new illness spreading through China that was influenza-like… And was worrying the hell out of the Chinese authorities who were coming down on it in a way we have not seen before, even with SARS and MERS.

I have to confess, I initially suspected (wrongly, I hasten to add) that this new disease had escaped from a lab. The way it spread, that it seemed to be ‘flu-like, the rapid response by the authorities. I don’t doubt research into modifying diseases goes on – by the UK, China, USA, the Vatican, by every country with a biotech industry. I know we have the tools to directly mess with genomes, I did it myself, crudely, 30 years ago and I know people who do it now, with considerable accuracy, for medical and other altruistic reasons. However, genetically engineering an organism leaves traces and when COVID-19 was sequenced there was no sign of this and it could be tracked to similar, previously known samples. I might even know some of the people who sequenced it and checked. But, anyway, that suspicion also made me watch.

The rate of spread in Wuhan was as shocking as the authorities’ response and then through February the scientific analyses started appearing. The R(0) number (infection rate) and the case fatality rate were both high. I’m not an epidemiologist but I had been taught the basics of it and I knew what was coming. No, that’s not right, I suspected what was coming, and I was worried. It was when the number of countries with cases started to increase that I felt I knew what was coming. By the end of February I was sure that unless something huge happened to change it, 2-3% of people, everywhere, would be killed. This was going to be like Spanish ‘flu only quicker (as we all travel so much). I became “The Voice Of Doom”.

On 2nd March I recommended to our CEO that UKOUG cancelled our Ireland event (people & organisations were pulling out so it was making it financially untenable anyway, but my major concern was that this was going to explode in the population). Thankfully the rest of the board agreed. I created my tracking spreadsheet about the 5th March. So far it’s been depressingly good at predicting where we are about a week in advance, and not bad for 10 days. I leave it to the experts for anything beyond that. All so depressing so far.

But Something Huge has happened. Governments did take it seriously. Well, most of them. And those who took it seriously soonest and hardest have fared best. The social lock-downs and preparation work that is going on in the UK is going to reduce the impact down dramatically and, more importantly, give us time to try and find solutions. But it still worries me. And I think they could have done it sooner. But most of the world is taking this very seriously – as it is very serious.

Part of me wants to keep watching how COVID-19 develops, and maybe writing more articles on it. I’ve had some really nice feedback on the first two and I want to do a post on where we might go in the coming months and why. But part of me wants to stop as it is making me very anxious and I’m sick of losing my cups of tea, or being stared at hard by the cat, and the wife asking me what the hell am I doing with the spanner and tin of peas.

I can’t easily listen to the government announcements each day as it is obvious, if you look at the scientific data and what medical professionals are saying, that they are simply not being candid. It’s all “we can beat this in the next few weeks” and “we will get you testing kits this month that are utterly reliable” despite the fact that’s going to need a scientific miracle to do that, let alone develop a reliable vaccine. I understand we need to keep positive but I think bullshitting the population now is only going to make telling them anything they will believe in 2 months even harder. In 6 months time when there is still no reliable vaccine and so many people have been wrongly diagnosed and the first few countries have had this rip through them almost uncontrolled, the lack of candid honesty will come back to roost. I worry about that a lot.

So I’m worried and I’m worried I’m going to be worried for months and months and months.

But for now I’m going to go for my daily (local) walk along a path I know will be almost empty of people and relax.

 

* Note, the graph and the spreadsheet are just “decoration”. They are my wild guesses on what may happen and have no reliability at all. Just saying

 

 

 

Creating a new disk group for use with ASM Filter Driver on the command line in Oracle 19c

In my previous post I shared my surprise when I learned that calling gridSetup.sh 19c for use with Oracle ASM Filter Driver (ASMFD) required me to specify the names of the native block devices. This is definitely different from installing ASM with ASMLib where you pass ASM disks as “ORCL:diskname” to the installer.

Um, that’s great, but why did I write this post? Well, once the installation/configuration steps are completed you most likely need to create at least a second disk group. In my case that’s going to be RECO, for use with the Fast Recovery Area (FRA). This post details the necessary steps to get there, as they are different compared to the initial call to gridSetup.sh.

And while I might sound like a broken record, I would like to remind you that I’m not endorsing ASM Filter Driver. I merely found the documentation unclear in some places, and this post hopes to clarify certain aspects around the use of ASMFD. Please remember that ASMFD is new-ish technology and it’s up to every user to apply industry best known methods to ensure everything works as expected.

My environment

The lab environment hasn’t changed, I’m still using the same Oracle Linux 7.7 KVM VM I prepared for use with the last post. Storage is still made accessible via the virtio driver. The VM boots into the Red Hat Kernel.

Previously I installed the base release, Oracle Restart 19.3.0. Since the base release has been made available, quite a few issues have been addressed in later Release Updates (RU). To keep up with the latest fixes my system has since been patched to 19.6.0. Oracle 19.6.0 was the current RU at the time of writing.

Creating a new disk group

Since I’m using Ansible for most things these days I had to come up with a straightforward method of creating a disk group. ASM has shipped with the ASM Configuration Assistant (asmca) for quite a while now, and it can be used to create a disk group in a simple, elegant call (link to documentation). I could of course have created the disk group in sqlplus but this would have required a lot more typing, and I’m inherently lazy.

Unlike the initial call to gridSetup.sh where you pass native block devices along with a request to configure ASMFD, the steps for creating the disk group require you to label the disks beforehand. This is pretty trivial, and more importantly, easy to automate with Ansible.

Labeling the disks

As per my earlier post, I’m planning on using /dev/vde1 and /dev/vdf1 for RECO. The first step is to label the disks. The call is similar to the one you read about earlier:

[root@server4 ~]# . oraenv
ORACLE_SID = [+ASM] ? +ASM
The Oracle base remains unchanged with value /u01/app/grid
[root@server4 ~]# asmcmd afd_label RECO1 /dev/vde1
[root@server4 ~]# asmcmd afd_label RECO2 /dev/vdf1
[root@server4 ~]# asmcmd afd_lslbl
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DATA1                                 /dev/vdc1
DATA2                                 /dev/vdd1
RECO1                                 /dev/vde1
RECO2                                 /dev/vdf1
[root@server4 ~]#  

Note the absence of the “--init” flag when invoking asmcmd afd_label. The way I understand it, this flag is used only during the initial installation.
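
If you want to double-check the labels before creating the disk group, the labeled disks should be visible to the ASM instance under their AFD: paths. A minimal sketch, run while connected to the ASM instance (the HEADER_STATUS values are what I would typically expect, not a guarantee):

-- labeled disks not yet in a disk group typically show CANDIDATE or PROVISIONED
select path, name, header_status, os_mb
from   v$asm_disk
where  path like 'AFD:%'
order  by path;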

Creating the disk group

Once the disks are labeled, you can create the disk group. Using the documentation reference I shared earlier I ended up with this call to asmca:

[grid@server4 ~]$ asmca -silent \
> -createDiskGroup -diskGroupName RECO \
> -disk 'AFD:RECO*' -redundancy EXTERNAL \
> -au_size 4 -compatible.asm 19.0.0 -compatible.rdbms 19.0.0

[DBT-30001] Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-200402AM115509.log for details.

[grid@server4 ~]$  

Thanks to ASMFD I don’t have to specify individual disks: I can simply tell it to use all disks that go by the name of RECO* – RECO1 and RECO2 in this example. The actual number of ASM disks doesn’t matter with this call, which again helps me automate the process.
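
For reference, a hand-written SQL*Plus equivalent of that asmca call would look roughly like the statement below (a sketch only, not run as part of this post; asmca remains the more convenient option for automation):

-- rough SQL equivalent of the asmca call above, using the same AFD labels
create diskgroup RECO external redundancy
  disk 'AFD:RECO1', 'AFD:RECO2'
  attribute 'au_size'          = '4M',
            'compatible.asm'   = '19.0.0',
            'compatible.rdbms' = '19.0.0';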

Since this environment is exclusively used for Oracle 19c I can safely set the compatibility to 19c for both ASM and the database. Refer to the ASM documentation for further information about the disk group compatibility properties.

Verification

The output of the command indicates success, so let’s have a look at the ASM configuration:

SQL> select d.name, d.path, d.library, dg.name
  2  from v$asm_disk d left join v$asm_diskgroup dg on (dg.group_number = d.group_number)
  3  where dg.name = 'RECO';

NAME       PATH            LIBRARY                                                      NAME
---------- --------------- ------------------------------------------------------------ ----------
RECO1      AFD:RECO1       AFD Library - Generic , version 3 (KABI_V3)                  RECO
RECO2      AFD:RECO2       AFD Library - Generic , version 3 (KABI_V3)                  RECO

SQL> select d.name, dg.name, dg.compatibility, dg.database_compatibility
  2  from v$asm_disk d left join v$asm_diskgroup dg on (dg.group_number = d.group_number)
  3  where dg.name = 'RECO';

NAME       NAME       COMPATIBILITY        DATABASE_COMPATIBILI
---------- ---------- -------------------- --------------------
RECO1      RECO       19.0.0.0.0           19.0.0.0.0
RECO2      RECO       19.0.0.0.0           19.0.0.0.0 

This seems to have worked. I can also see the disk groups registered in Clusterware:

[grid@server4 ~]$ crsctl stat res -t -w "TYPE == ora.diskgroup.type"
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       server4                  STABLE
ora.RECO.dg
               ONLINE  ONLINE       server4                  STABLE
--------------------------------------------------------------------------------
[grid@server4 ~]$ 

Make Your Business More Accessible with New Blocks

From our support sessions with customers each month, we know that growing your brand or business is a top website goal. And in this unprecedented time in which more people around the world are staying at home, it’s important to promote your products and services online to reach a wider audience and connect with more people.

Our team has been hard at work improving the block editor experience. We’ve launched six new blocks that integrate WordPress.com and Jetpack-enabled sites with popular services — Eventbrite, Calendly, Pinterest, Mapbox, Google Calendar, and OpenTable — enabling you to embed rich content and provide booking and scheduling options right on your blog or website.

Whether you’re an online boutique, a pilates studio, an independent consultant, or a local restaurant, these blocks offer you more ways to promote your brand or business. Take a look at each block — or simply jump to a specific one below.


Promote online events with the Eventbrite block

Looking for a way to promote an online event (like your museum’s virtual curator talk or your company’s webinar on remote work), or even an at-home livestream performance for your fans and followers? Offering key features of the popular event registration platform, the Eventbrite block embeds events on posts and pages so your visitors can register and purchase tickets right from your site.


Quick-start guide:

  • To use this block, you need an Eventbrite account. If you don’t have one, sign up at Eventbrite for free.
  • In the block editor, click the Add Block (+) button and search for and select the Eventbrite Checkout block.
  • Enter the URL of your Eventbrite event. Read these steps from Eventbrite if you need help.
  • Select from two options: an In-page Embed shows the event details and registration options directly on your site. The Button & Modal option shows just a button; when clicked, the event details will pop up so your visitor can register.

Learn more on the Eventbrite block support page.


Schedule sessions with the Calendly block

Want to make it easier for people to book private meditation sessions or language lessons with you? The Calendly block, featured recently in our guide on moving your classes online, is a handy way for your clients and students to book a session directly on your site — eliminating the time spent coordinating schedules. You can also use the Calendly block to schedule team meetings or group events.


Quick-start guide:

  • To use this block, you need a Calendly account. Create one for free at Calendly.
  • In the block editor, click the Add Block (+) button and search for and select the Calendly block.
  • Enter your Calendly web address or embed code. Follow these steps from Calendly if you need help.
  • Select from two styles: the Inline style embeds a calendar directly onto your site; the Link style inserts a button that a visitor can click to open a pop-up calendar.
  • This block is currently available to sites on the WordPress.com Premium, Business, or eCommerce plans. It’s free on Jetpack sites.

Learn more on the Calendly block support page.


Up your visual game with the Pinterest block

Strong visuals help to provide inspiration, tell your stories, and sell your products and services. Pinterest is an engaging way for bloggers, influencers, and small business owners to enhance their site content and expand their following. With the Pinterest block, you can embed and share pins, boards, and profiles on your site.


Quick-start guide:

  • In the block editor, click the Add Block (+) button and search for and select the Pinterest block.
  • Paste the URL of a pin, board, or profile you’d like to display and click Embed. Note that you can only embed public boards.
  • Pro tip: in the block editor, go to Layout Elements and select Layout Grid to create a visually striking layout with pins, boards, and profiles, as shown above.

Display locations with the Map block

A map on your site is a quick visual way to display a location, like your restaurant’s takeout window or the drop-off spot for donations to a local food bank. Powered by mapping platform Mapbox, the Map block embeds a customized map on your site. Show the location of your business, a chain of boutique hotels, the meeting spots for your nonprofit’s volunteers, and more.


Quick-start guide:

  • In the block editor, click the Add Block (+) button and search for and select the Map block.
  • In the text field, type the location you want to display and select the correct location from among the results that appear.
  • Click on the red marker to edit the title and caption of the marker.
  • Explore the toolbar for block-specific settings. Add more markers, for example, by clicking the Add a marker button.
  • In the sidebar, customize your map’s appearance (including colors, height, and zoom level).

Explore more settings on the Map block support page.


Share your calendar with the Google Calendar block

Are you an author planning a book tour (or a series of online readings)? A digital marketing consultant hosting social media workshops? A neighborhood pop-up bakery? With the Google Calendar block, you can display a calendar of upcoming events or your hours of operation.


Quick-start guide:

  • In Google Calendar, click the three dots next to your calendar name and select Settings and sharing.
  • Under Access Permissions, ensure Make available to public is checked.
  • Click on Integrate calendar on the left and copy the code under Embed code.
  • In the block editor, click the Add Block (+) button, search for and select the Custom HTML block, and paste the code you copied in Google Calendar.
  • Publish your post or page. The next time you edit this post or page, you’ll see the code has been converted to shortcode.

Explore more settings on the Google Calendar block support page.


Streamline reservations with the OpenTable block

If you’re a restaurant or cafe owner, a primary goal of your site is to increase the number of bookings. Sure, people aren’t dining out right now, but you can be ready to take reservations in the future. With the OpenTable block, people can reserve a table directly from a post or page instead of calling or booking through a different reservation service.


Quick-start guide:

  • To use this block, your restaurant must be listed on OpenTable. Create an OpenTable listing now.
  • In the block editor, click the Add Block (+) button and search for and select the OpenTable block.
  • Enter your OpenTable Reservation Widget embed code. Check this OpenTable guide if you need help.
  • Explore the block’s toolbar and sidebar settings. For example, choose from four different embed styles: Standard, Tall, Wide, and Button.
  • This block is currently available to sites on the WordPress.com Premium, Business, or eCommerce plans. It’s free on Jetpack sites.

Learn more on the OpenTable block support page.


Which blocks are you most excited about?

Stay tuned for more new blocks soon!

Active Data Guard – limitations on ROWTYPE

I had an AskTOM question come in with an issue trying to run PL/SQL on an Active Data Guard (ADG) database (which of course is running in read-only mode). The PL/SQL block seems innocuous; it does no DML and yet refuses to run:



SQL> DECLARE
  2    tst_row  dual%ROWTYPE;
  3  BEGIN
  4    null;
  5  END;
  6  /
DECLARE
*
ERROR at line 1:
ORA-16000: database or pluggable database open for read-only access
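
As a quick sanity check that the session really is on a read-only standby, you can look at v$database (a minimal sketch; both columns are standard):

-- on an Active Data Guard standby you would expect OPEN_MODE = 'READ ONLY WITH APPLY'
select open_mode, database_role from v$database;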

I did a trace on the code and made an interesting discovery, which is the cause of the error. When we reference a ROWTYPE definition, we need a mechanism to refer to, and compile against, the definition of that type, so we temporarily “create” a type definition to handle that. Tracing the above anonymous block, you’ll see the following entries in the trace file:



********************************************************************************

insert into "SYS"."KOTTD$" (SYS_NC_OID$, SYS_NC_ROWINFO$) values (:1, :2 )


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          3          0           0
Execute      1      0.00       0.00          0         67          4           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0         70          4           1

********************************************************************************

insert into "SYS"."KOTTB$" (SYS_NC_OID$, SYS_NC_ROWINFO$) values (:1, :2 )


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          3          0           0
Execute      1      0.00       0.00          0         67          3           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0         70          3           1

********************************************************************************

insert into "SYS"."KOTAD$" (SYS_NC_OID$, SYS_NC_ROWINFO$)  values (:1, :2 )


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          3          0           0
Execute      1      0.00       0.00          0         97         24           8
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0        100         24           8

********************************************************************************

insert into "SYS"."KOTTBX$" (SYS_NC_OID$, SYS_NC_ROWINFO$)  values (:1, :2 )


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          3          0           0
Execute      1      0.00       0.00          0         67          3           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0         70          3           1

In a read-only environment that is a problem, because we cannot perform DML, even against the data dictionary. However, if we make this scenario a little more realistic in terms of why we would want to run a PL/SQL block against an ADG environment, it is probably to perform some more “robust” data processing. In this instance, you can use the ADG extensions to redirect that operation back to the primary:



SQL> alter session enable adg_redirect_plsql;

Session altered.

SQL> DECLARE
  2    tst_row  dual%ROWTYPE;
  3  BEGIN
  4    null;
  5  END;
  6  /

PL/SQL procedure successfully completed.

If you are interested in more of the capabilities of ADG, check out this great presentation from OpenWorld last year.

Big thanks to DataGuard PM Pieter Van Puymbroeck for his assistance with this post.

PL/SQL Machine Code Trace - event 10928

I have had an interest in PL/SQL for around 25 years. I have always liked this great language as it’s powerful and simple, and a great tool for writing code in the database. I wrote my very first PL/SQL....[Read More]

Posted by Pete On 02/04/20 At 01:33 PM

The Oracle ACE Program : My 14 Year Anniversary

I was checking my calendar, thinking I was about to jack it in for the day, and I noticed it’s April 1st, which means it’s the 14th anniversary of my becoming an Oracle ACE. Can’t believe I nearly missed that!

As usual I’ll mention some of the other anniversaries that will happen throughout this year.

  • 25 years working with Oracle technology in August. (August 1995)
  • 20 years doing my website in July. (Original name: 03 July 2000 or current name: 31 August 2001)
  • 15 years blogging in June. (15 June 2005)
  • 14 years on the Oracle ACE Program. (01 April 2006)
  • A combined 3 years as an Oracle Developer Champion, now renamed to Oracle Groundbreaker Ambassador. (21 June 2017)

The 20 years of doing the website in July will be a pretty big one. I might have to do something for that.

Be Careful of What You Include In SQL*Net Security Banners

A short post today to add a little to the post I made the other day. In that post Add A SQL*Net Security Banner And Audit Notice I talked about using the sqlnet.ora parameters SEC_USER_AUDIT_ACTION_BANNER and SEC_USER_UNAUTHORIZED_ACCESS_BANNER to add security....[Read More]

Posted by Pete On 01/04/20 At 11:50 AM

World Backup Day- Backing up an Oracle Database using RMAN to Azure Blob Storage

A DBA is only as good as their last backup…or more so, their last recovery.

To celebrate #WorldBackupDay on Twitter, I’m blogging on how to back up an Oracle database directly to Azure Blob storage.  Yes, you could back up to a managed disk directly attached to the VM and then copy it off, but Azure Blob Storage is inexpensive, provides considerable speed, and offers the opportunity to create an NFS mount to use the backups with other Oracle hosts for cloning, recovery, etc.

Configure Blob Storage in the Azure Portal

In this example, we have an Oracle database running on an Azure IaaS VM and need to back up the database with RMAN.  No RMAN recovery catalog will be used, but yes, you can use one – no problem.

If you don’t already have a database and want to play along, you can follow the blog post below to create one.

Create the blob storage that you want to use with the database. You can do this in the portal: go to Storage Accounts and click on +Add.


Click on Next and choose Public Endpoints and choose the defaults for the rest of the tabs and create.

Once created, go into the storage account and, on the left blade, choose Containers.  Add a container, choosing a descriptive name, and choose blob and anonymous read access.  You’ll need this name later for your blobfuse configuration.


Install BlobFuse on the VM Host

Once you have your Oracle VM and your storage set up, you need to install blobfuse.  Follow these instructions to complete this step on the VM host as root.

For our new VM, Oracle Linux requires the same installation as RHEL:

sudo rpm -Uvh https://packages.microsoft.com/config/rhel/7/packages-microsoft-prod.rpm

Run the Install:

sudo yum install blobfuse

Create the directories and ensure the ownership is correct on the mount

sudo mkdir /mnt/blobfuse -p

sudo mkdir /mnt/blobfuse/orabcntr1
sudo chown -R oracle /mnt/blobfuse/

You should now see your new mount point:

Next you need to create a configuration file that contains the vital information for blobfuse to log in.  You can find the values in the Azure Portal -> Storage Accounts -> your storage account -> Access keys. The container name is the name of the virtual folder that you’re mounting. This container should already have been created either in the Azure Portal or in Azure Storage Explorer:

touch ~/connection.cfg

chmod 600 ~/connection.cfg
vi ~/connection.cfg

The file contains three lines and looks similar to the following:

accountName orabkup2
accountKey 5RD7lzbNJOABSC3/V6E3hsU1aXSRlksjdf8762lks7lksh39JKSINE-
containerName orabcntr1
Save the file.

Mount the storage to be used:

blobfuse /mnt/blobfuse/orabcntr1 --tmp-path=/mnt/ramdisk/blobfusetmp --config-file=~/connection.cfg -o attr_timeout=240 -o entry_timeout=240 -o negative_timeout=120 --log-level=LOG_DEBUG --file-cache-timeout-in-seconds=120

You can now view the mount point on the host:

df -h
blobfuse         30G   17G   11G  61% /mnt/blobfuse/orabcntr1

Test that you can write to the blob storage as Oracle:

su - oracle
cd /mnt/blobfuse/orabcntr1
mkdir test
echo "hello world" > test/blob.txt

RMAN Backup with Blob Storage

Now that blobfuse is installed and the blob storage is mounted, configure RMAN to write its backups to the mount point:

export ORACLE_HOME=/u01/app/oracle/product/18.0.0/dbhome_1
export ORACLE_SID=cdb1
rman
connect target /

CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/mnt/blobfuse/orabcntr1/snapcf_ev.f';
CONFIGURE CHANNEL 1 DEVICE TYPE DISK CONNECT '*' FORMAT '/mnt/blobfuse/orabcntr1/DB_NAME/%U';
new RMAN configuration parameters are successfully stored
CONFIGURE CHANNEL 2 DEVICE TYPE DISK CONNECT '*' FORMAT '/mnt/blobfuse/orabcntr1/DB_NAME/%U';
new RMAN configuration parameters are successfully stored

If you now run the following:

SHOW ALL;

The following will be seen as part of the RMAN configuration:

CONFIGURE CHANNEL 2 DEVICE TYPE DISK CONNECT '*' FORMAT   '/mnt/blobfuse/orabcntr1/DB_NAME/%U';
CONFIGURE CHANNEL 1 DEVICE TYPE DISK CONNECT '*' FORMAT   '/mnt/blobfuse/orabcntr1/DB_NAME/%U';
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/mnt/blobfuse/orabcntr1/snapcf_ev.f';

Now run the backup:

run
{
  BACKUP AS COMPRESSED BACKUPSET ARCHIVELOG ALL NOT BACKED UP FILESPERSET 10 DELETE ALL INPUT TAG 'ARCH';
}


Once the backup has succeeded, you can schedule backups of the database plus archive logs, as well as regular archive log backups, to the blob storage via cron.
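
To confirm the scheduled backups actually completed, and how much data ended up in the blob container, a minimal sketch against the standard RMAN job view:

-- recent RMAN jobs with their status and output size
select session_key, status, start_time, end_time, output_bytes_display
from   v$rman_backup_job_details
order  by start_time desc;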

 

 

 

 




Create a Simple Oracle VM on Azure IaaS

Use the following shell script to create your Oracle VM.  I chose the following parameters to create mine:

Using Azure Cloud Shell with persistent storage, both of which are linked on the GitHub page, I uploaded the mk_oravm.sh script and ran it after changing the permissions.

chmod 744 mk_oravm.sh

Provision the VM

Run the script:

 ./mk_oravm.sh

Answer the questions from the script:

What is the name for the resource group to create the deployment in? Example: ORA_GRP
Enter your Resource Group name:
orabkup_grp

Here's the installation version urns available, including Oracle and Oracle Linux
Urn
-----------------------------------------------------------
Oracle:Oracle-Database-Ee:18.3.0.0:18.3.20181213

Enter the urn you'd like to install, feel free to copy from the list and paste here:
Oracle:Oracle-Database-Ee:18.3.0.0:18.3.20181213

What unique name your Oracle database server? This will be used for disk naming, must be unique.  Example: ora122db1
Enter the DB Server name:
ora183db1

What size deployment would you like, choose from the following: StandardSSD_LRS, Standard_LRS, UltraSSD_LRS ?  Example:
Enter the Size name from above:
StandardSSD_LRS

Choose an Admin user to manage your server.  This is not the ORACLE user for the box, but an ADMIN user
Enter in the admin user name, example: azureuser
azureuser

You must choose a location region to deploy your resources to.  The list as follows:
centralus
eastus
eastus2
westus
northcentralus
southcentralus
westcentralus
westus2
Enter the zone from the list above:
westus2

The script will then deploy your VM in Azure:

{
  "id": "/subscriptions/73aa270e-fffd-411a-b368-b44263f61deb/resourceGroups/orabkup_grp",
  "location": "westus2",

.......

      "provisioningState": "Succeeded",
      "resourceGroup": "orabkup_grp",
      "sourceAddressPrefix": "*",
      "sourceAddressPrefixes": [],
      "sourceApplicationSecurityGroups": null,
      "sourcePortRange": "*",
      "sourcePortRanges": [],
      "type": "Microsoft.Network/networkSecurityGroups/securityRules"
    }
  ],
  "subnets": null,
  "tags": {},
  "type": "Microsoft.Network/networkSecurityGroups"
}
Deployment of Oracle VM ora183db1 in resource group orabkup_grp Complete
Keys generated for authentication
Admin name is azureuser

Setup Environment

Now, log into the new VM, set your Oracle home and SID, and start the listener.

Create your data directory.  For this example, I’m just going to add a second directory under /u01, but you can add a disk and place it there:

sudo mkdir /u01/data
sudo chown oracle:oinstall /u01/data

Switch over to Oracle and start the listener:

sudo su - oracle
export ORACLE_HOME=/u01/app/product/18.3.0.0/dbhome_1
lsnrctl start

Create your Oracle 18c Database:

dbca -silent \
       -createDatabase \
       -templateName General_Purpose.dbc \
       -gdbname cdb1 \
       -sid cdb1 \
       -responseFile NO_VALUE \
       -characterSet AL32UTF8 \
       -sysPassword OraPasswd1 \
       -systemPassword OraPasswd1 \
       -createAsContainerDatabase true \
       -numberOfPDBs 1 \
       -pdbName pdb1 \
       -pdbAdminPassword OraPasswd1 \
       -databaseType MULTIPURPOSE \
       -automaticMemoryManagement false \
       -storageType FS \
       -datafileDestination /u01/data \
       -ignorePreReqs

The output will look like the following:

Prepare for db operation
8% complete
Copying database files
31% complete
Creating and starting Oracle instance
32% complete
...
77% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/cdb1.
Database Information:
Global Database Name:cdb1
System Identifier(SID):cdb1
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/cdb1/cdb1.log" for further details.

Once the database creation is complete, a few more steps can be performed to set up the TNSNAMES, etc., or you can simply log in and work with the new database using a bequeath connection.
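
As a quick post-creation check (a minimal sketch, using the pdb1 name passed to dbca above), you can confirm the PDB is open and make that state persist across instance restarts:

-- confirm the container status
select name, open_mode from v$pdbs;

-- open the PDB if needed and remember the state across restarts
alter pluggable database pdb1 open;
alter pluggable database pdb1 save state;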

 


