Oakies Blog Aggregator

Sourdough – Creating The “Starter”


A couple of people have asked me to describe how I create the Sourdough bread that I often tweet about baking. It’s too much for a Facebook post, and waaaay too much for a twitter thread, so I’m putting it here on my blog. This is part one – you need something called a “Sourdough Starter” to make sourdough bread, and this is how I create my starter. Part two will describe making an actual loaf of sourdough.


Nothing much beats a sandwich made with home made sourdough

I know this is seriously off-topic for a blog that is supposed to mostly consider Oracle tech & performance, working in Oracle/I.T., and thoughts on IT management & how people work, but let’s face it – the more semi-retired I get, the more this blog is becoming somewhere I simply share “stuff”. However, there is a bit of a link. Over the last few years baking bread has been taken up by a surprising number of people in the Oracle presenting sphere (and this pre-dates the craze for making your own bread that came with Covid-19). One presenter, Jože Senegačnik, even wins national awards for his bread in Slovenia.

What is Sourdough?

Sourdough is a rustic type of bread, usually white, with a dark, thick crust and usually more flavour than a standard loaf of white bread. I know I am biased, but the sourdough bread I make is about the nicest bread I have ever eaten (with perhaps the exception of the bread of some of my other baking friends). It is certainly nicer than your average loaf and better than “normal” bread I have made at home.

Sourdough bread has an open texture (lots of holes), so it is quite light and, at the centre, soft. Sometimes the bread has large voids in it. If you buy sourdough in a shop or it is part of a meal in a café/restaurant (it’s almost always the bread used in posh cafes with your smashed avocado and free range egg for breakfast) it seems to me that the posher the place, the larger the voids. Sometimes a slice of sourdough toast can be more void than bread. It does not need the large voids and, in my opinion, they are detrimental to the bread. You can’t make a sandwich or put anything on the bread without the contents falling through the big holes! It’s fine with soup & stews I suppose, where you are dipping chunks in liquid.

Sourdough is a type of wheat-based bread where instead of using dried yeast or fresh yeast that comes in blocks that look like soft cheese, you use an active, growing “porridge” of yeast. This is a fairly thick mixture of strong bread flour and water, with the yeast growing in it, slowly consuming the flour to produce more yeast.


big voids to lose your topping through…

This “porridge” is called the Starter, and you add it to a mixture of more bread flour, water, and a little salt, to make your bread dough for baking. The starter smells quite strongly, distinctly sour, and I suspect (but am not sure) that sourdough bread is named more for the smell of the starter than the final loaf, which only has a hint of the smell if any at all.

The bread itself also has a distinctive tang to it, not as marked as the smell of the starter mixture, but it is a key part of the flavour.

The crust is an important part of a sourdough loaf. It tends to be thicker, stronger, and (when fresh), well… crustier than normal bread.

The key to it all is the starter, so how do you create and keep your starter?

 

 

The Jar

You need a sealable jar to hold your starter. I use a Kilner jar, as pictured, but a very large jam jar will probably be fine. The jar needs to be able to hold well over a pint/half litre. My jar can hold a litre, which is large enough to generate enough sourdough starter for a good sized loaf but not so large it won’t fit in my fridge (which is important).

Once you have your jar, make sure you have:

  • a packet of white strong bread flour.
  • either some grapes or apples or, if you can manage it, some starter from a friend.
  • at least a week before you want an actual loaf of your own sourdough bread.

I would recommend you use white bread flour: brown or wholemeal (or even seeded) flour not only provides bits in your mixture that yeast cells struggle to get to (which might make it more likely for your starter to get infected and “go off”), but also, as you add quite a bit of starter to the final dough, your bread is always going to be partially wholemeal or brown if that is what your starter is based on, no matter what you want.

It has to be strong bread flour. Strong bread flour has a higher percentage of protein (gluten) in it, which is vital to support the texture of bread. Cake is lighter than bread, and the plain flour you make cakes from has less gluten in it.

Sterilise your jar before you use it. Either wash it in really hot water or, preferably, put it in an oven at about 120C for 20-30 minutes. Let it cool to room temperature before you use it though. You want to sterilise it because the idea is to get a yeast colony growing in the jar that will out-compete bacteria and non-yeast fungi, keeping the mixture clean, edible, and not poisonous. To begin with there will not be a lot of yeast cells, and any bacteria or fungus present could make the mixture go bad before the yeast takes hold.

Making the starter

Put about 300 grams of the strong white bread flour in the jar and add about 300ml of water, stirring it. You might want to add the water in two or three parts, mixing it well as you go, but don’t stir it for minutes. You will hopefully end up with a smooth mixture that is a bit thicker than porridge/wallpaper paste/pesto. Now add a little more water until it *is* the consistency of porridge. Thin enough that it would pour, thickly, but thick enough that a spoon stuck in it will probably stay in place. Don’t forget to take the spoon out…

Now the tricky bit: getting the yeast into it. Don’t use baker’s yeast or brewer’s yeast or anything you would buy to make a normal loaf of bread; you want something slower growing and, if possible, local. In some places, at least in the UK, you might have enough yeast in the air to get it going, especially if you live in the countryside near orchards. Leave the jar with the lid open for a few hours and then shut it. A more reliable way to get the yeast is to take the skin off four or five grapes, preferably ones you have had in the house a few days, or some peel (just a couple of long strips) from an apple, either a locally grown one or one that’s been hanging about in the fruit bowl a few days (but is not rotten!!!). The peel of fruits like these is covered in many yeasts. Use only the peel, not the pulp of the fruit. Chop the peel into little bits, throw it in the mixture, and stir.


The yeasts on the skin will get it all going

If you are lucky enough to know someone who already makes sourdough who is local (in which case, why are you reading this?!? Go have a cup of tea with them or a glass of wine and get them to show you how to do all this – relevant covid-19 restrictions allowing of course) then get some off them, about 30ml will be more than enough. I got some from a local bakery a couple of years back who specialised in sourdough. You can even use dried out sourdough, as I did once. I’ll put the little story of that in another post.

The advantage of using some existing starter mix is that it gets going quicker and you can be pretty sure it will work. Getting your starter fully active from scratch using peel or the air can take weeks; a dollop of starter in its prime will get you a fully active new starter in days. I swap the jar I keep my starter in every few months, as they can get a bit gungy & crusty: I make the flour/water porridge and chuck in about 200ml of my existing mixture – usually what is left when I am making a loaf. I can use the “new” starter created in this way in a couple of days.

Shut the jar. If you were lucky enough to use existing starter, keep it out at cool room temperature if you are making a loaf in a day or two. Otherwise put it in the fridge.

If you really are starting from fresh, with peel, put the jar somewhere that is “cool room temperature”, that is about 16-18C, not near a radiator or source of heat, not somewhere cold. Hopefully, in a few days you will see little bubbles in the mixture. That means the yeast is growing and releasing carbon dioxide! After about 5 days, whether you see little bubbles or not, take out about a third of the mixture and discard, replace with the same volume of flour/water mix that you removed, give it all a good stir and seal the jar again. Do so again in another 5 days. If you do not see any bubbles by now, it has probably failed. Discard and start again.


A starter in its prime, a day after being fed

If the mixture develops any colour other than pale cream/oatmeal (so if it goes green or purple or pink or grey) you now have a jar of poison. Bacteria or fungus have won and out-competed the yeast. If there are spots of grey or other colour on the surface, or fluffy spots, again it is poison. Throw the contents away, sterilise the jar, try again.

Once you have a pale cream/maybe very slightly oatmeal coloured gloop that bubbles a bit you have your starter. Well done. You now have a new pet in your life.

Looking After The Starter

Once you have created the starter you have actually created a living colony – and you have to feed and care for it. If the yeast runs out of food it will go dormant and that opens the door to bacteria or moulds getting a foothold and growing. You have to keep the yeast active and reproducing. To do this you feed it.

Professional bakers who are making a lot of sourdough bread are constantly taking out part of the starter mixture and using it in the dough. An 800 gram loaf will use between 150 and 250 grams of starter depending on how they make the dough. This is replaced with the same volume of flour/water mixture they take out. You can do this yourself, if you are going to make a new loaf every few days you can keep the starter at room temperature and replace what you take out with flour/water mix. The yeast in the remaining starter quickly works through the added mix and new yeast cells grow.

If you are going to make a loaf once a week you can extend this process by putting the starter in the fridge. You take the starter out the fridge a day before you are going to use it. This is so it warms up and becomes more active. If you have space in the jar, you might want to add a bit of extra flour/water mix for the yeast’s breakfast (about 100 grams flour) when you take it out the fridge – I do. You take out about a third of the starter when you make the loaf the next day and replace it with flour/water mix. I leave my jar out for a few hours/overnight after this to let it get going and then you put it back in the fridge.

If you keep your starter for more than a week in the fridge, or 3 or 4 days at room temperature, without using it, you have to feed it. Take out a third of the mixture and discard, replace with water/flour mix that you stir into the starter. So long as you regularly feed the starter it will last pretty much forever, but of course you are simply throwing away flour all the time.

If you are a bad starter owner and you forget about it, it won’t be happy. A layer of fluid will separate out at the top of the mixture and it will go grey. Grey is bad. If this happens, if the fluid and only the very surface of the starter are a light grey, no fluff, you can pour off the fluid and the top third of the starter, feed it, and it might be OK. I’ve brought back starters from grey gloom a few times. However, the starter won’t make a good loaf again until you have fed it a couple of times. If the grey comes back straight away, you best put the poor thing down.

If your starter or anything in the jar goes pink, orange, purple, green, or fluffy, you have let the yeast get too weak and you have grown something new. It might be useful to a microbiologist, it could even contain a new antibiotic unknown to man, but it is far, far more likely to be poison. Throw it away, start again.

When you feed the starter, make sure there is space for it to expand. I keep my jar about half full. When I feed it, the contents expand with the CO2 and then subside. If the jar is too full, there is no space to expand. Also, I suspect my jar leaks ever so slightly, so no pressure builds up. If your jar is totally sealed you might have issues with it spraying out when you open it. Let me know if you do; photographs of the mess would be appreciated.

The more regularly you use the starter, the better will be the bread you make. When I’ve kept my starter out of the fridge for a week or two and either made a loaf or simply fed the starter every 3 or 4 days, it gets more active and the dough rises more readily when I make a loaf. If I leave the mixture in the fridge for a month, only occasionally feeding it, the first loaf I make from it struggles to rise.

Starters Vary

I’ve occasionally had two starters running at the same time. I once had my home-grown starter and also one seeded from some starter given to me by Jože. I’ve also had a starter that was initiated from a sample from a local baker’s, as I have said, and I’ve created a new starter from scratch when I already had one going. The bread made from different starters has slightly different tastes, and the one I got from Jože was more active than my home-grown one. I have to say, I did not notice much difference between the two home-grown starters I had. I am sure this is down to a difference in the actual yeasts in the mixture (or not, in the case of my two home-grown ones).

I discussed this with a fellow Oracle Presenter Baker and we decided it was highly likely that the actual yeasts in there not only vary with where the seed material came from but also how you keep it. If you keep it in the fridge, yeasts that are more tolerant of cold conditions will survive better, keep the starter at room temperature and those yeasts that reproduce faster in warmer conditions will take over.

Whatever, a loaf of sourdough bread you make from your own starter is a real treat. I’ll describe my baking process in the next post.

 

Switching from CentOS 8 to Oracle Linux 8 (OL8)

If you’re a CentOS user, you’ve probably already seen that Red Hat are ditching CentOS, and CentOS 8 will be the first casualty. At the time of writing, Red Hat haven’t released a clear plan for what CentOS users should do. Neither Fedora nor CentOS Stream is a viable option for people looking for long-term stability. There’s a suggestion that cut-price RHEL licenses may be available in the future, but all we know at this point is that CentOS is on a road to nowhere.

One of the options you might want to consider is switching from CentOS 8 to Oracle Linux 8. Oracle Linux is a binary clone of RHEL, like CentOS. Also like CentOS, you can use Oracle Linux for free (downloads here), and that includes free updates.

But what do you do about your existing CentOS 8 installations? Well that’s really easy too, as you can convert them directly to Oracle Linux. Oracle have an overview of the process here, but it boils down to downloading a script from GitHub and running it. Here are the steps I used to do a conversion.

First, take a backup of the system, so you have a fallback restore point.

We display the contents of the “/etc/centos-release” file, which shows we have a CentOS 8.2 installation.

$ cat /etc/centos-release
CentOS Linux release 8.2.2004 (Core)
$

Download the conversion script from the Oracle GitHub repo and run it. Then wait while it downloads the packages and switches you from CentOS to Oracle Linux.

curl -O https://raw.githubusercontent.com/oracle/centos2ol/main/centos2ol.sh
sudo bash centos2ol.sh

Reboot the system.

sudo reboot

We now have an Oracle Linux server running the UEK kernel.

$ sudo cat /etc/oracle-release
Oracle Linux Server release 8.3
$ uname -r
5.4.17-2036.101.2.el8uek.x86_64
$

If you don’t want to use the UEK kernel, you can switch to the Red Hat Compatibility Kernel really easily. List the current kernels.

$ ls -l /boot/vmlinuz-*
-rwxr-xr-x. 1 root root 8913656 Oct 22 22:59 /boot/vmlinuz-0-rescue-5fd85e2afa24422eb63894e2dbfa9898
-rwxr-xr-x. 1 root root 8975912 Dec 18 18:07 /boot/vmlinuz-0-rescue-caad1bd0b25943b1b526a131661074b3
-rwxr-xr-x. 1 root root 8920200 Sep 14 14:45 /boot/vmlinuz-4.18.0-193.19.1.el8_2.x86_64
-rwxr-xr-x. 1 root root 9520664 Dec 16 00:42 /boot/vmlinuz-4.18.0-240.8.1.el8_3.x86_64
-rwxr-xr-x. 1 root root 8975912 Dec 3 02:02 /boot/vmlinuz-5.4.17-2036.101.2.el8uek.x86_64
$

Set the latest Red Hat Compatibility Kernel as the default. It will be the highest version one without “uek” in the name.

$ sudo grubby --set-default /boot/vmlinuz-4.18.0-240.8.1.el8_3.x86_64
The default is /boot/loader/entries/caad1bd0b25943b1b526a131661074b3-4.18.0-240.8.1.el8_3.x86_64.conf with index 3 and kernel /boot/vmlinuz-4.18.0-240.8.1.el8_3.x86_64
$

Reboot the server.

$ sudo reboot

Now we are using the Red Hat Compatibility Kernel rather than UEK.

$ cat /etc/oracle-release
Oracle Linux Server release 8.3
$
$ uname -r
4.18.0-240.8.1.el8_3.x86_64
$

Easy!

At this point we need to do some testing to make sure we are happy with the final result!

This type of switch may not be the preferred solution for every system, but it’s simple and saves you doing a full rebuild to switch to another distribution.

If you’re still not sure about Oracle Linux, here’s a FAQ post I wrote about it.

Cheers

Tim…

The post Switching from CentOS 8 to Oracle Linux 8 (OL8) first appeared on The ORACLE-BASE Blog.



2020 in review

Well… it’s been quite a year! I downloaded the Oxford “Words of the Year” report, which came out with gems such as:

  • Bushfires
  • Impeachment
  • Coronavirus
  • Lockdown
  • Social Distancing
  • Zoom Bombing
  • Mask Shaming
  • Black Lives Matter
  • Cancel Culture
  • Superspreader

amongst others. “Covidiot” cracked a mention and remains a personal favourite, although reading the list above could perhaps be considered just a reinforcement of another of the words… “doom scrolling”.

Business as usual

It is astonishing that just 11 months ago, the year had commenced like any other. Advocates, Aces and PMs all came together to share their knowledge at the Israel Oracle User Group event in Tel Aviv.


I grabbed this next pic from the front of Oren Nakdimon‘s place where he and his family hosted us for a magnificent dinner – definitely a highlight of the trip, and in hindsight, a brutal reminder of the importance (and subsequent loss) of face to face networking.


After the event, I took a couple of days vacation and travelled to Petra for the first time, which was breathtaking.


Enough time has passed now for me to share a funny tale about the Petra experience.

We left Petra around 5pm, and then it’s a 5-hour bus ride back to Tel Aviv. About halfway through the ride, in the pitch black of the desert, our bus was brought to a halt by our driver at a small intersection. Two small mini-vans then arrived from another direction and our driver told us all to exit the bus and get into the mini-vans. This wasn’t mentioned on the itinerary, but with slightly raised heart rates, the ten of us travelling split into the two vans.

Off into the dark we drove in a new direction. After about 40 minutes, in the absolute middle of nowhere, our minivan came to a halt and our driver switched off the engine, and thus we sat in silence and darkness. No explanation was given. As we soon discovered, our driver did not speak English and none of us were fluent in the local dialect. Another ten minutes passed, and every time we tried to communicate with the driver, he simply made a “Sssh” sign with his finger. Now a good deal of panic was starting to set in, and we started to discuss fleeing the bus, rushing the driver, and countless other options that are borne out of paranoia. Just as we were doing this, two taxis emerged on the horizon. Even then, we were thinking to ourselves: “Is this where we are all never seen again?” Thankfully, one of the taxi drivers spoke English and shed some light. It all turned out to be a miscommunication – the large bus was needed elsewhere, the minivans were a substitute, and ultimately the tour company decided it would be cheaper to grab taxis from the current point and take the minivans back to their base.

Or at least that’s what we were told.

Then onto Italy, where as always the Italian Oracle User Group were fantastic hosts, treating us to a wonderful tech event, as well as experiencing the sights (or should I say sites) of Rome and Milan! Thank you again to Francesco, Roberto, Gianni and the rest of the team for their hospitality.


The only time Francesco doesn’t wave his arms when speaking is when there is a glass in his hand.


And of course, coffee as only coffee should be!

Reaching out to the next generation

In February I spoke at SINFO, a tech conference organised and run entirely by a student body in Lisbon, Portugal. It was nice to speak to a large group of tech-passionate individuals who were still “finding their way” in the ever changing I.T landscape. Many were unaware that databases underpinned so much of the software they used on a daily basis.

A nice surprise on the flight to Portugal was that one of the flight attendants was a friend I had coached at volleyball many years ago in Perth. Sally was nice enough to take a few minutes for a chat and a selfie, but alas, the upgrade to business class did not eventuate.


And then everything changed

My itinerary for March initially read like this:

  • London for a meetup,
  • then to Ireland for their event,
  • then to Netherlands to meet with Qualogy,
  • then onto Romania to present at their user group meeting,
  • back to Netherlands for an APEX event,
  • then to London for customer meetings and then home

Then as the COVID situation grew in seriousness and global scope, there was a manic few weeks where events were being changed and/or cancelled on a day-to-day basis. The Ireland event got cancelled, and so I went straight on to the Netherlands for the Qualogy event, which was a great success. Big thanks to Patrick Barel for his organisation of that event.

Then the APEX event got postponed and customer meetings in London were abandoned. I was starting to panic about travel, and thus I did the Romanian event as a virtual one via Zoom from my Amsterdam Hotel, thanks to a lot of assistance from Mirela from the Romanian User Group.

The seriousness of the COVID situation really hit home when I saw a notification from the Australian foreign affairs department advising that if I didn’t get home in the next few days, the borders would be shut with no target date for re-opening. I then spent more time on the phone with travel companies than at any other time in my life! This was followed by an anxious 24-hour wait in Amsterdam, trying to get all of the ducks in a row so I could get home to Perth. Thankfully, Amsterdam is the perfect place to wander to get rid of all the stress.


Finally the confirmations came through and I managed to rearrange my travel to immediately head home. I was unaware how lucky I was until I discovered that I had scored one of the very last flights to Perth! Thus when I entered transit in Dubai, I was met with a sight I doubt will ever be repeated. Besides a few remaining shop staff, as far as the eye could see, I was the only person in the terminal!


A virtual world

With face to face meetings on hold, database advocacy still must go on so virtual events became the norm.

Luckily for us, AskTOM had been running the Office Hours program for 2 years, so we were already well equipped in terms of running Zoom sessions, and it was a privilege to be asked by others within the Oracle organisation for tips and techniques on becoming fluent with the various hardware/software technologies that make for good virtual presentations.

For me, this was an extra challenge because most sessions I give are in the evening Perth time, so having anything resembling natural lighting is difficult. Thankfully, LEDs don’t draw too much power, otherwise I think I would need to invest in some solar power. And who needs expensive acoustic sound panels when a blanket can just be hung from the ceiling!


So whilst some very late nights have become the norm, I also appreciate the fact that the embrace of virtual events means I could “attend” some conferences that I normally would not be able to due to the travel distances. However, nothing will ever replace the fulfilment of face to face events, and already we’re starting to see the effects of “zoom burnout“. One of the big challenges for 2021 is how to avoid virtual events being viewed as “just another webinar”.

But one personal benefit of the new virtual approach to conferences was that more time at home meant that I could bring a new addition into the family. I got Bailey from a greyhound rescue centre in April, and he occasionally makes an appearance in Zoom calls.


Wrapping up 2020

With all the challenges of 2020, I hope you have still got value out of my tech content this year. After all, my job is to make you be more successful with the Oracle Database. I think I’ve still managed reasonably productive output this year:

  • Presented 46 conference sessions,
  • Published 81 videos,
  • Published 67 blog posts,
  • Answered ~600 AskTOM questions,
  • Responded to ~1000 AskTOM comments

but I’m always keen to hear your ideas, your needs and yes, your frustrations as well. There have been a lot of cool new things in the Oracle landscape this year, and in particular, lots of great leaps forward in the amount of free stuff you can use.

So I look forward to hearing from you in 2021. Until then, in whichever way you observe the festive season, please stay safe and healthy, and hopefully 2021 will continue to strengthen our Oracle community.


PS – if you’d like a funny (but very NSFW) summation of 2020, here’s a classic made by Jason Redwine on Twitter from July.

Oracle 19c Automatic Indexing: Currently Broken In Autonomous ATP Databases? (What In The World)

I’ve been playing with the free tier Oracle Autonomous Databases for quite some time, but unfortunately in recent times, I’ve hit a bit of a snag. The Automatic Index capability appears to be currently broken… The Automatic Indexing task appears to have stopped running, yes even with the AUTO_INDEX_MODE set to “IMPLEMENT” and with […]

State of the Word 2020

State of the Word is an annual keynote address delivered by WordPress project co-founder, Matt Mullenweg. Catch this year’s keynote in the video below!

New to State of the Word?

If this is your first time hearing of this talk and want to learn more, you’re in luck! Check out previous recordings below.

New from WordPress.com Courses: Podcasting for Beginners

Would you like to learn how to create your own podcast or improve your existing podcast? WordPress.com Courses is excited to offer our new on-demand course, Podcasting for Beginners. We’ll help you get started, learn how to publish, and even how to use your podcast to make a living.  

Our courses are flexible. You can join, and learn at your own pace. But that’s just the start. Podcasting for Beginners is more than just a course —  it’s a community that gives you access to regular Office Hours hosted by WordPress experts. A place where you can ask questions, share your progress, and pick up a few tips along the way. 

Lessons include step-by-step videos covering:

  • The Foundations (Curating your content and an editorial calendar.) 
  • Interviews (Recording, editing, and outreach.) 
  • Configuring Your Site (Integrating your podcast into your site and distributing it.) 
  • Growing Your Community (Engaging with listeners.) 
  • Making Money (Monetization basics and preparing for the future.) 

Let us take you from “What is podcasting?” to launching a podcast of your own.

Cost: A $99 annual subscription gives you unlimited access to course content, our online community, and virtual sessions.

Join now: the first 100 customers will enjoy 50% off the subscription fee with the code PODCAST50.

Efficiently query DBA_EXTENTS for FILE_ID / BLOCK_ID

By Franck Pachot

This was initially posted to CERN Database blog on Thursday, 27 September 2018 where it seems to be lost. Here is a copy thanks to web.archive.org

Did you ever try to query DBA_EXTENTS on a very large database with LMT tablespaces? I had to, in the past, in order to find which segment a corrupt block belonged to. The information about extent allocation is stored in the datafile headers, visible through X$KTFBUE, and queries on it can be very expensive. In addition to that, the optimizer tends to start with the segments and get to this X$KTFBUE for each of them. At that time, I had quickly created a view on the internal dictionary tables, forcing it to start with X$KTFBUE via a materialized CTE, to replace DBA_EXTENTS. I published this on dba-village in 2006.

I recently wanted to know the segment/extent for a hot block, identified by its file_id and block_id, on a 900TB database with 7000 datafiles and 90000 extents, so I went back to this old query and got my result in 1 second. The idea is to be sure that we start with the file (X$KCCFE) and then get to the extent allocation (X$KTFBUE) before going to the segments.


So here is the query:


column owner format a6
column segment_type format a20
column segment_name format a15
column partition_name format a15
set linesize 200
set timing on time on echo on autotrace on stat
WITH
 l AS ( /* LMT extents indexed on ktfbuesegtsn,ktfbuesegfno,ktfbuesegbno */
  SELECT ktfbuesegtsn segtsn,ktfbuesegfno segrfn,ktfbuesegbno segbid, ktfbuefno extrfn,
         ktfbuebno fstbid,ktfbuebno + ktfbueblks - 1 lstbid,ktfbueblks extblks,ktfbueextno extno
  FROM sys.x$ktfbue
 ),
 d AS ( /* DMT extents ts#, segfile#, segblock# */
  SELECT ts# segtsn,segfile# segrfn,segblock# segbid, file# extrfn,
         block# fstbid,block# + length - 1 lstbid,length extblks, ext# extno
  FROM sys.uet$
 ),
 s AS ( /* segment information for the tablespace that contains afn file */
  SELECT /*+ materialized */
  f1.fenum afn,f1.ferfn rfn,s.ts# segtsn,s.FILE# segrfn,s.BLOCK# segbid ,s.TYPE# segtype,f2.fenum segafn,t.name tsname,blocksize
  FROM sys.seg$ s, sys.ts$ t, sys.x$kccfe f1,sys.x$kccfe f2 
  WHERE s.ts#=t.ts# AND t.ts#=f1.fetsn AND s.FILE#=f2.ferfn AND s.ts#=f2.fetsn
 ),
 m AS ( /* extent mapping for the tablespace that contains afn file */
SELECT /*+ use_nl(e) ordered */
 s.afn,s.segtsn,s.segrfn,s.segbid,extrfn,fstbid,lstbid,extblks,extno, segtype,s.rfn, tsname,blocksize
 FROM s,l e
 WHERE e.segtsn=s.segtsn AND e.segrfn=s.segrfn AND e.segbid=s.segbid
 UNION ALL
 SELECT /*+ use_nl(e) ordered */ 
 s.afn,s.segtsn,s.segrfn,s.segbid,extrfn,fstbid,lstbid,extblks,extno, segtype,s.rfn, tsname,blocksize
 FROM s,d e
  WHERE e.segtsn=s.segtsn AND e.segrfn=s.segrfn AND e.segbid=s.segbid
 UNION ALL
 SELECT /*+ use_nl(e) use_nl(t) ordered */
 f.fenum afn,null segtsn,null segrfn,null segbid,f.ferfn extrfn,e.ktfbfebno fstbid,e.ktfbfebno+e.ktfbfeblks-1 lstbid,e.ktfbfeblks extblks,null extno, null segtype,f.ferfn rfn,name tsname,blocksize
 FROM sys.x$kccfe f,sys.x$ktfbfe e,sys.ts$ t
 WHERE t.ts#=f.fetsn and e.ktfbfetsn=f.fetsn and e.ktfbfefno=f.ferfn
 UNION ALL
 SELECT /*+ use_nl(e) use_nl(t) ordered */
 f.fenum afn,null segtsn,null segrfn,null segbid,f.ferfn extrfn,e.block# fstbid,e.block#+e.length-1 lstbid,e.length extblks,null extno, null segtype,f.ferfn rfn,name tsname,blocksize
 FROM sys.x$kccfe f,sys.fet$ e,sys.ts$ t
 WHERE t.ts#=f.fetsn and e.ts#=f.fetsn and e.file#=f.ferfn
 ),
 o AS (
  SELECT s.tablespace_id segtsn,s.relative_fno segrfn,s.header_block   segbid,s.segment_type,s.owner,s.segment_name,s.partition_name
  FROM SYS_DBA_SEGS s
 ),
datafile_map as (
SELECT
 afn file_id,fstbid block_id,extblks blocks,nvl(segment_type,decode(segtype,null,'free space','type='||segtype)) segment_type,
 owner,segment_name,partition_name,extno extent_id,extblks*blocksize bytes,
 tsname tablespace_name,rfn relative_fno,m.segtsn,m.segrfn,m.segbid
 FROM m,o WHERE extrfn=rfn and m.segtsn=o.segtsn(+) AND m.segrfn=o.segrfn(+) AND m.segbid=o.segbid(+)
UNION ALL
SELECT
 file_id+(select to_number(value) from v$parameter WHERE name='db_files') file_id,
 1 block_id,blocks,'tempfile' segment_type,
 '' owner,file_name segment_name,'' partition_name,0 extent_id,bytes,
  tablespace_name,relative_fno,0 segtsn,0 segrfn,0 segbid
 FROM dba_temp_files
)
select * from datafile_map where file_id=5495 and 11970455 between block_id and block_id+blocks

And here is the result, with execution statistics:



   FILE_ID   BLOCK_ID     BLOCKS SEGMENT_TYPE         OWNER  SEGMENT_NAME    PARTITION_NAME    EXTENT_ID      BYTES TABLESPACE_NAME      RELATIVE_FNO     SEGTSN     SEGRFN    SEGBID
---------- ---------- ---------- -------------------- ------ --------------- ---------------- ---------- ---------- -------------------- ------------ ---------- ---------- ----------
      5495   11964544       8192 INDEX PARTITION      LHCLOG DN_PK           PART_DN_20161022       1342   67108864 LOG_DATA_20161022            1024       6364       1024       162

Elapsed: 00:00:01.25

Statistics
----------------------------------------------------------
        103  recursive calls
       1071  db block gets
      21685  consistent gets
        782  physical reads
        840  redo size
       1548  bytes sent via SQL*Net to client
        520  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

Knowing the segment from the block address is important in performance tuning, when we get the file_id/block_id from wait event parameters. It is even more important when a block corruption is detected, and having a fast query may help.
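As a side note (my own addition, not from the original article): for many ‘User I/O’ wait events, P1 is the absolute file number and P2 the block number, so a query like this hypothetical one can harvest candidate file_id/block_id values to feed into the datafile_map query above. Check V$EVENT_NAME to confirm the meaning of P1/P2 for the event you are chasing.

-- sessions currently waiting on User I/O, with the file/block they wait on
select sid, event, p1 file_id, p2 block_id
from v$session
where state='WAITING' and wait_class='User I/O';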

The article Efficiently query DBA_EXTENTS for FILE_ID / BLOCK_ID first appeared on Blog dbi services.

Amazon Aurora: calling a lambda from a trigger

By Franck Pachot

You may want your RDS database to interact with other AWS services. For example, send a notification on a business or administration situation, with a “push” method rather than a “pull” one from a CloudWatch alert. You may even design this call to be triggered on database changes. And Amazon Aurora provides this possibility by running a lambda from the database, by calling mysql.lambda_async() from a MySQL trigger. This is an interesting feature, but I think it is critical to understand how it works in order to use it correctly.
This is the kind of feature that looks very nice on a whiteboard or a PowerPoint slide: the DML event (like an update) runs a trigger that calls the lambda, all event-driven. However, this is also dangerous: are you sure that every update must execute this process? What about an update during an application release, or a dump import, or on a logical replication target? Now imagine that you have a bug in your application that has set some wrong data and you have to fix it in an emergency in the production database, under stress, with manual updates, and you aren’t aware of that trigger, or just forget about it in this situation… Do you want to take this risk? As the main idea is to run some external service, the consequence might be very visible and hard to fix, like spamming all your users, or involuntarily DDoS-ing a third-tier application.

I highly encourage you to encapsulate the DML and the lambda call in a procedure that is clearly named and described. Let’s take a silly example: sending a “your address has been changed” message when a user updates his address. Don’t put the “send message” call in an AFTER UPDATE trigger, because the UPDATE semantic is to update, not to send a message. What you can do is write a stored procedure like UPDATE_ADDRESS() that does the UPDATE and calls the “send message” lambda. You may even provide a boolean parameter to enable or disable the sending of the message. Then, the ones who call the stored procedure know what will happen. And the ones who just do an update… will just do an update. Actually, executing DML directly from the application is often a mistake. A database should expose business-related data services, like many other components of your application architecture, and this is exactly the goal of stored procedures.
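To make that concrete, here is a minimal sketch of such a procedure. The users table, its columns, and the lambda ARN are hypothetical placeholders (not from the original setup); the mysql.lambda_async() call is the same one demonstrated in the trigger below:

delimiter $$
create procedure update_address(
  in p_user_id int,
  in p_new_address varchar(200),
  in p_notify boolean -- the caller decides explicitly whether the message is sent
)
begin
  -- the DML and its side effect live together, under a name that says what happens
  update users set address = p_new_address where user_id = p_user_id;
  if p_notify then
    -- remember: this call is not rolled back if the enclosing transaction is
    call mysql.lambda_async(
      'arn:aws:lambda:eu-central-1:000000000000:function:notify-address-change',
      concat('{"user_id":"', p_user_id, '"}'));
  end if;
end;
$$
delimiter ;

With this, a bulk data fix can call update_address() with p_notify set to false, and a plain UPDATE statement sends no message at all.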

I’m sharing here some tests on calling lambda from Aurora MySQL.

Wiring the database to lambdas

A lambda is not a simple procedure that you embed in your program. It is a service and you have to control the access to it:

  • You create the lambda (create function, deploy and get the ARN)
  • You define an IAM policy to invoke this lambda
  • You define an IAM role to apply this policy
  • You set this role as aws_default_lambda_role in the RDS cluster parameter group
  • You add this role to the cluster (RDS -> database cluster -> Manage IAM roles)

Here is my lambda which just logs the event for my test:


import json

def lambda_handler(event, context):
    print('Hello.')
    print(event)
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }

Creating the test database


 drop database if exists demo;
 create database demo;
 use demo;
 drop table if exists t;
 create table t ( x int, y int );
 insert into t values ( 1, 1 );

I have a simple table here, with a simple row.


delimiter $$
create trigger t_bufer before update on t for each row
begin
 set NEW.y=OLD.x;
 call mysql.lambda_async(
    'arn:aws:lambda:eu-central-1:802756008554:function:log-me',
    concat('{"trigger":"t_bufer","connection":"',connection_id(),'","old": "',OLD.x,'","new":"',NEW.x,'"}'));
end;
$$
delimiter ;

This is my trigger which calls my lambda on an update with old and new value in the message.


MYSQL_PS1="Session 1 \R:\m:\s> " mysql -v -A --host=database-1.cluster-ce5fwv4akhjp.eu-central-1.rds.amazonaws.com --port=3306 --user=admin --password=ServerlessV2

I connect a first session, displaying the time and session in the prompt.


Session 1 23:11:55> use demo;
Database changed

Session 1 23:12:15> truncate table t;
--------------
truncate table t
--------------

Query OK, 0 rows affected (0.09 sec)

Session 1 23:12:29> insert into t values ( 1, 1 );
--------------
insert into t values ( 1, 1 )
--------------

Query OK, 1 row affected (0.08 sec)

This just resets the test case when I want to re-run it.


Session 1 23:12:36> start transaction;
--------------
start transaction
--------------

Query OK, 0 rows affected (0.07 sec)

Session 1 23:12:48> update t set x = 42;
--------------
update t set x = 42
--------------

Query OK, 1 row affected (0.11 sec)
Rows matched: 1  Changed: 1  Warnings: 0

Session 1 23:12:55> rollback;
--------------
rollback
--------------

Query OK, 0 rows affected (0.02 sec)

I updated one row and rolled back my transaction. This is to show that you must be aware that calling a lambda is outside the ACID protection of relational databases. The trigger is executed during the update, without knowing if the transaction will be committed or not (voluntarily or because an exception is encountered). When you do only things in the database (like writing into another table) there is no problem, because this happens within the transaction. If the transaction is rolled back, all the DML done by the triggers is rolled back as well. Even if it occurred, nobody sees its effect, except the current session, before the whole transaction is committed.

But when you call a lambda, synchronously or asynchronously, the call is executed and its effect will not be rolled back if your transaction does not complete. This can be ok in some cases, if what you execute is related to the intention of the update and not its completion. Or you must manage this exception in your lambda, maybe by checking in the database that the transaction occurred. But in that case, you should really question your architecture (a call to a service, calling back to the caller…)

So… be careful with that. If your lambda is there to be executed when a database operation has been done, it may have to be done after the commit, in the procedural code that has executed the transaction.

Another test…

This non-ACID execution was the important point I wanted to emphasize, so you can stop here if you want. This other test is interesting for people used to Oracle Database only, probably. In general, nothing guarantees that a trigger is executed only once for the triggering operation. What we have seen above (rollback) can be done internally when a serialization exception is encountered and the database retries the operation. Oracle Database has non-blocking reads, and this is not only for SELECT but also for the read phase of an UPDATE. You may have to read a lot of rows to verify the predicate and update only a few, and you don’t want to lock all the rows read but only the ones that are updated. Manually, you would do that with a serializable transaction and retry in case you encounter rows that have been modified between your MVCC snapshot and the current update time. But at statement level, Oracle does that for you.

It seems that it does not happen in Aurora MySQL and PostgreSQL, as the locking for reads is more aggressive, but just in case I tested the same scenario where an update restart would have occurred in Oracle.


Session 1 23:13:00> start transaction;
--------------
start transaction
--------------

Query OK, 0 rows affected (0.06 sec)

Session 1 23:13:09> update t set x = x+1;
--------------
update t set x = x+1
--------------

Query OK, 1 row affected (0.02 sec)
Rows matched: 1  Changed: 1  Warnings: 0

Session 1 23:13:16> select * from t;
--------------
select * from t
--------------

+------+------+
| x    | y    |
+------+------+
|    2 |    1 |
+------+------+
1 row in set (0.01 sec)

Session 1 23:13:24>

I have started a transaction that increased the value of X, but the transaction is still open. What I do next is from another session.

This is session 2:


Session 2 23:13:32> use demo;

Database changed
Session 2 23:13:34>
Session 2 23:13:35> select * from t;
--------------
select * from t
--------------

+------+------+
| x    | y    |
+------+------+
|    1 |    1 |
+------+------+
1 row in set (0.01 sec)

Of course, thanks to transaction isolation, I do not see the uncommitted change.


Session 2 23:13:38> update t set x = x+1 where x > 0;
--------------
update t set x = x+1 where x > 0
--------------

At this step, the update hangs on the locked row.

Now back in the first session:


Session 1 23:13:49>
Session 1 23:13:50>
Session 1 23:13:50>
Session 1 23:13:50> commit;
--------------
commit
--------------

Query OK, 0 rows affected (0.02 sec)

I just committed my change here, so X has been increased to the value 2.

And here is what happened in my second session, with the lock released by the first session:


Query OK, 1 row affected (11.42 sec)
Rows matched: 1  Changed: 1  Warnings: 0

Session 2 23:13:58> commit;
--------------
commit
--------------

Query OK, 0 rows affected (0.01 sec)

Session 2 23:14:10> select * from t;
--------------
select * from t
--------------

+------+------+
| x    | y    |
+------+------+
|    3 |    2 |
+------+------+
1 row in set (0.01 sec)

Session 2 23:14:18>

This is the correct behavior. Even if a select sees the value of X=1 the update cannot be done until the first session has committed its transaction. This is why it waited, and it has read the committed value of X=2 which is then incremented to 3.

And finally, here is what was logged by my lambda:



2020-12-13T23:12:55.558+01:00	START RequestId: 39e4e41f-7853-4b11-a12d-4a3147be3fc7 Version: $LATEST	2020/12/13/[$LATEST]25e73c8c6f9e4d168fa29b9ad2ba76d8
2020-12-13T23:12:55.561+01:00	Hello.	2020/12/13/[$LATEST]25e73c8c6f9e4d168fa29b9ad2ba76d8
2020-12-13T23:12:55.561+01:00	{'trigger': 't_bufer', 'connection': '124', 'old': '1', 'new': '42'}	2020/12/13/[$LATEST]25e73c8c6f9e4d168fa29b9ad2ba76d8
2020-12-13T23:12:55.562+01:00	END RequestId: 39e4e41f-7853-4b11-a12d-4a3147be3fc7	2020/12/13/[$LATEST]25e73c8c6f9e4d168fa29b9ad2ba76d8
2020-12-13T23:12:55.562+01:00	REPORT RequestId: 39e4e41f-7853-4b11-a12d-4a3147be3fc7 Duration: 1.16 ms Billed Duration: 2 ms Memory Size: 128 MB Max Memory Used: 51 MB	2020/12/13/[$LATEST]25e73c8c6f9e4d168fa29b9ad2ba76d8
2020-12-13T23:13:16.620+01:00	START RequestId: 440128db-d6de-4b2c-aa98-d7bedf12a3d4 Version: $LATEST	2020/12/13/[$LATEST]25e73c8c6f9e4d168fa29b9ad2ba76d8
2020-12-13T23:13:16.624+01:00	Hello.	2020/12/13/[$LATEST]25e73c8c6f9e4d168fa29b9ad2ba76d8
2020-12-13T23:13:16.624+01:00	{'trigger': 't_bufer', 'connection': '124', 'old': '1', 'new': '2'}	2020/12/13/[$LATEST]25e73c8c6f9e4d168fa29b9ad2ba76d8
2020-12-13T23:13:16.624+01:00	END RequestId: 440128db-d6de-4b2c-aa98-d7bedf12a3d4	2020/12/13/[$LATEST]25e73c8c6f9e4d168fa29b9ad2ba76d8
2020-12-13T23:13:16.624+01:00	REPORT RequestId: 440128db-d6de-4b2c-aa98-d7bedf12a3d4 Duration: 1.24 ms Billed Duration: 2 ms Memory Size: 128 MB Max Memory Used: 51 MB	2020/12/13/[$LATEST]25e73c8c6f9e4d168fa29b9ad2ba76d8
2020-12-13T23:13:58.156+01:00	START RequestId: c50ceab7-6e75-4e43-b77d-26c1f6347fec Version: $LATEST	2020/12/13/[$LATEST]25e73c8c6f9e4d168fa29b9ad2ba76d8
2020-12-13T23:13:58.160+01:00	Hello.	2020/12/13/[$LATEST]25e73c8c6f9e4d168fa29b9ad2ba76d8
2020-12-13T23:13:58.160+01:00	{'trigger': 't_bufer', 'connection': '123', 'old': '2', 'new': '3'}	2020/12/13/[$LATEST]25e73c8c6f9e4d168fa29b9ad2ba76d8
2020-12-13T23:13:58.160+01:00	END RequestId: c50ceab7-6e75-4e43-b77d-26c1f6347fec	2020/12/13/[$LATEST]25e73c8c6f9e4d168fa29b9ad2ba76d8
2020-12-13T23:13:58.160+01:00	REPORT RequestId: c50ceab7-6e75-4e43-b77d-26c1f6347fec Duration: 0.91 ms Billed Duration: 1 ms Memory Size: 128 MB Max Memory Used: 51 MB	2020/12/13/[$LATEST]25e73c8c6f9e4d168fa29b9ad2ba76d8

First, we see at 23:12:55 the update from X=1 to X=42 that I rolled back later. This proves that the call to the lambda is not transactional. It may sound obvious, but if you come from Oracle Database you would have used Advanced Queuing, where the queue is stored in an RDBMS table and thus benefits from sharing the same transaction as the submitter.
My update occurred at 23:12:48 but remember that those calls are asynchronous so the log happens a bit later.

Then there was my second test, where I updated, at 23:13:09, X from 1 to 2, and we see this update logged at 23:13:16, which is after the update (because the call is asynchronous) but before the commit, which happened at 23:13:50 according to my session log above. So there is no doubt that the execution of the lambda does not wait for the completion of the transaction that triggered it.

And then the update from session 2, which was executed at 23:13:38 but only returned at 23:13:50, as it was waiting for the first session to end its transaction. The lambda log at 23:13:58 shows it, and shows that the old value is X=2, which is expected as the update was done after the first session’s change. This is where, in Oracle, we would have seen two entries: one updating from X=1, because this would have been read without a lock, and then rolled back to restart the update after X=2. But we don’t have this problem here, as MySQL acquires a row lock during the read phase.

However, nothing guarantees that there are no internal rollback + restart. And anyway, rollback can happen for many reasons and you should think, during design, whether the call to the lambda should occur for DML intention or DML completion. For example, if you use it for some event sourcing, you may accept the asynchronous delay, but you don’t want to receive an event that actually didn’t occur.

The article Amazon Aurora: calling a lambda from a trigger first appeared on Blog dbi services.

Oracle write consistency bug and multi-thread de-queuing

By Franck Pachot

This was initially posted on CERN Database blog where it seems to be lost. Here is a copy thanks to web.archive.org
Additional notes:
– I’ve tested and got the same behaviour in Oracle 21c
– you will probably enjoy reading Hatem Mahmoud going further on Write consistency and DML restart

Posted by Franck Pachot on Thursday, 27 September 2018

Here is a quick test I did after encountering an abnormal behavior in write consistency and before finding some references to a bug on StackOverflow (yes, write consistency questions on StackOverflow!) and AskTOM. And a bug opened by Tom Kyte in 2011, that is still there in 18c.

The original issue was with a task management system to run jobs. Here is the simple table where all rows have a ‘NEW’ status, and the goal is to have several threads processing them by updating them to the ‘HOLDING’ status and adding the process name.


set echo on
drop table DEMO;
create table DEMO (ID primary key,STATUS,NAME,CREATED)
 as select rownum,cast('NEW' as varchar2(10)),cast(null as varchar2(10)),sysdate+rownum/24/60 from xmltable('1 to 10')
/

Now here is the query that selects the 5 oldest rows in status ‘NEW’ and updates them to the ‘HOLDING’ status:


UPDATE DEMO SET NAME = 'NUMBER1', STATUS = 'HOLDING' 
WHERE ID IN (
 SELECT ID FROM (
  SELECT ID, rownum as counter 
  FROM DEMO 
  WHERE STATUS = 'NEW' 
  ORDER BY CREATED
 ) 
WHERE counter <= 5) 
;

Note that the update also sets the name of the session which has processed the rows, here ‘NUMBER1’.

Once the query started, and before the commit, I’ve run the same query from another session, but with ‘NUMBER2’.


UPDATE DEMO SET NAME = 'NUMBER2', STATUS = 'HOLDING' 
WHERE ID IN (
 SELECT ID FROM (
  SELECT ID, rownum as counter 
  FROM DEMO 
  WHERE STATUS = 'NEW' 
  ORDER BY CREATED
 ) 
WHERE counter <= 5) 
;

Of course, this waits on a row lock from the first session, as it has selected the same rows. Then I commit the first session and check, from the first session, what has been updated:


commit;
set pagesize 1000
select versions_operation,versions_xid,DEMO.* from DEMO versions between scn minvalue and maxvalue order by ID,2;

V VERSIONS_XID             ID STATUS     NAME       CREATED        
- ---------------- ---------- ---------- ---------- ---------------
U 0500110041040000          1 HOLDING    NUMBER1    27-SEP-18 16:48
                            1 NEW                   27-SEP-18 16:48
U 0500110041040000          2 HOLDING    NUMBER1    27-SEP-18 16:49
                            2 NEW                   27-SEP-18 16:49
U 0500110041040000          3 HOLDING    NUMBER1    27-SEP-18 16:50
                            3 NEW                   27-SEP-18 16:50
U 0500110041040000          4 HOLDING    NUMBER1    27-SEP-18 16:51
                            4 NEW                   27-SEP-18 16:51
U 0500110041040000          5 HOLDING    NUMBER1    27-SEP-18 16:52
                            5 NEW                   27-SEP-18 16:52
                            6 NEW                   27-SEP-18 16:53
                            7 NEW                   27-SEP-18 16:54
                            8 NEW                   27-SEP-18 16:55
                            9 NEW                   27-SEP-18 16:56
                           10 NEW                   27-SEP-18 16:57

I have used a flashback query to see all versions of the rows. All 10 have been created, and the first 5 of them have been updated by NUMBER1.

Now, my second session continues, updating to NUMBER2. I commit and look at the row versions again:


commit;
set pagesize 1000
select versions_operation,versions_xid,DEMO.* from DEMO versions between scn minvalue and maxvalue order by ID,2;

V VERSIONS_XID             ID STATUS     NAME       CREATED        
- ---------------- ---------- ---------- ---------- ---------------
U 04001B0057030000          1 HOLDING    NUMBER2    27-SEP-18 16:48
U 0500110041040000          1 HOLDING    NUMBER1    27-SEP-18 16:48
                            1 NEW                   27-SEP-18 16:48
U 04001B0057030000          2 HOLDING    NUMBER2    27-SEP-18 16:49
U 0500110041040000          2 HOLDING    NUMBER1    27-SEP-18 16:49
                            2 NEW                   27-SEP-18 16:49
U 04001B0057030000          3 HOLDING    NUMBER2    27-SEP-18 16:50
U 0500110041040000          3 HOLDING    NUMBER1    27-SEP-18 16:50
                            3 NEW                   27-SEP-18 16:50
U 04001B0057030000          4 HOLDING    NUMBER2    27-SEP-18 16:51
U 0500110041040000          4 HOLDING    NUMBER1    27-SEP-18 16:51
                            4 NEW                   27-SEP-18 16:51
U 04001B0057030000          5 HOLDING    NUMBER2    27-SEP-18 16:52
U 0500110041040000          5 HOLDING    NUMBER1    27-SEP-18 16:52
                            5 NEW                   27-SEP-18 16:52
                            6 NEW                   27-SEP-18 16:53
                            7 NEW                   27-SEP-18 16:54
                            8 NEW                   27-SEP-18 16:55
                            9 NEW                   27-SEP-18 16:56
                           10 NEW                   27-SEP-18 16:57

This is not what I expected. I wanted my second session to process the other rows, but here it seems that it has processed the same rows as the first one. What has been done by NUMBER1 has been lost and overwritten by NUMBER2. This is inconsistent, violates ACID properties, and should not happen. An SQL statement must ensure write consistency: either by locking all the rows as soon as they are read (for non-MVCC databases where reads block writes), or by restarting the update when a mutating row is encountered. Oracle’s default behaviour is the second case: the NUMBER2 query reads rows 1 to 5, because the changes made by NUMBER1, not yet committed, are invisible to NUMBER2. But the execution should keep track of the columns referenced in the WHERE clause. When attempting to update a row, now that the concurrent change is visible, the update is possible only if the WHERE clause used to select the rows still selects this row. If not, the database should raise an error (this is what happens in the serializable isolation level) or restart the update when in the default statement-level consistency.

Here, probably because of the nested subquery, write consistency is not guaranteed, and this is a bug.
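
For comparison, here is a minimal sketch (my own illustration, not from the original test case) of how the serializable isolation level surfaces the conflict instead of silently overwriting it:


-- Session 1: update a NEW row, do not commit yet
update DEMO set NAME = 'NUMBER1', STATUS = 'HOLDING' where ID = 1 and STATUS = 'NEW';

-- Session 2: run at serializable isolation and touch the same row
alter session set isolation_level = serializable;
update DEMO set NAME = 'NUMBER2', STATUS = 'HOLDING' where ID = 1 and STATUS = 'NEW';
-- Session 2 blocks on the row lock; when Session 1 commits, it raises
-- ORA-08177: can't serialize access for this transaction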

One workaround would be to avoid subqueries altogether. However, as we need to ORDER BY the rows to process the oldest first, the subquery cannot be avoided. The workaround, then, is to add STATUS = 'NEW' to the WHERE clause of the update itself, so that the update restart works correctly, as sketched below.
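
A minimal sketch of that workaround (my own illustration, assuming an update that takes the five oldest NEW rows through an ordered subquery, matching the shape of this demo):


update DEMO set NAME = 'NUMBER2', STATUS = 'HOLDING'
where STATUS = 'NEW'   -- extra predicate: on restart, rows already taken by NUMBER1 no longer qualify
and ID in (select ID
           from (select ID from DEMO where STATUS = 'NEW' order by CREATED)
           where rownum <= 5);

With this predicate, when Oracle restarts the statement after seeing the mutated rows, the rows already grabbed by NUMBER1 fail the WHERE clause and are skipped rather than overwritten.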

However, the goal of multithreading these processes is scalability, and repeated update restarts may end up serializing all of the updates.

The preferred solution is to ensure that the updates never attempt to touch the same rows. This can be achieved with SELECT … FOR UPDATE SKIP LOCKED. As this cannot be added directly to the UPDATE statement, we need a cursor. Something like this can do the job:


declare
 counter number := 5;  -- de-queue at most 5 rows per run
begin
 -- SKIP LOCKED silently skips rows locked by concurrent sessions,
 -- and FIRST_ROWS(5) favours a plan that returns the first rows quickly
 for c in (select /*+ first_rows(5) */ ID from DEMO
           where STATUS = 'NEW'
           order by CREATED
           for update skip locked)
 loop
  counter := counter - 1;
  update DEMO set NAME = 'NUMBER1', STATUS = 'HOLDING' where ID = c.ID and STATUS = 'NEW';
  exit when counter = 0;  -- stop after taking 5 rows
 end loop;
end;
/
commit;

This can be optimized further, but it gives an idea of what is needed for a scalable solution. Waiting on locks is not scalable.

The article Oracle write consistency bug and multi-thread de-queuing appeared first on Blog dbi services.

Solving a John Conway puzzle with SQL

A cool little conundrum came across my email inbox this week which I thought I’d share. Back in 2016, Pizza Hut ran a promotional competition with the famous mathematician John Conway on Pi Day. Sadly, John Conway passed away this year from COVID-19 – another great mind lost to the pandemic.

His puzzle involved the overlap between pandigital and polydivisible numbers, and whilst those terms might seem a little confronting, the puzzle is very easy to articulate.

“Find a 10 digit number that uses each of the digits 0 to 9 exactly once and where the number formed by the first n digits of the number is divisible by n.”

A first pass

To help explain the challenge, we can reduce the number of digits:

Find a 1 digit number that is divisible by 1. 

Well, that’s trivial: any single digit will do. Let’s up the game a little.

Find a 2 digit number that is divisible by 2, and each digit occurs only once.

Once again, that is easy – we could choose 12 or 14 or 28 etc. Let’s move on.

Find a three digit number that is divisible by 3, and the first 2 digits are divisible by 2.

Perhaps a touch more pause for thought is needed here, but luckily a likely guess (“123”) actually fits the bill.

Find a four digit number that is divisible by 4, where the first three digits are divisible by 3, and the first 2 digits are divisible by 2.

Unfortunately “1234” doesn’t work, but a few more guesses get me to 1236, and thus I could immediately jump to the five digit case with “12365”, because a number ending in 5 is always divisible by 5.

Wow, I’m halfway through the digits and the problem seems easy! For the sixth digit, I got lazy and enlisted a little SQL to help with my modulo calculations:


SQL> select 12365*10+rownum-1 x, mod(12365*10+rownum-1,6) y
  2  from dual
  3  connect by level <= 10;

         X          Y
---------- ----------
    123650          2
    123651          3
    123652          4
    123653          5
    123654          0
    123655          1
    123656          2
    123657          3
    123658          4
    123659          5

and now I have “123654”. I’ll repeat that to move into the seventh digit…and


SQL> select 123654*10+rownum-1 x, mod(123654*10+rownum-1,7) y
  2  from dual
  3  connect by level <= 10
  4  /

         X          Y
---------- ----------
   1236540          4
   1236541          5
   1236542          6
   1236543          0
   1236544          1
   1236545          2
   1236546          3
   1236547          4
   1236548          5
   1236549          6

Ker splat! The only option is a trailing digit of “3”, which has already been used. Suddenly the magnitude of the puzzle becomes apparent.
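
To confirm the dead end (a quick check of my own, not part of the original email), we can ask for any unused trailing digit that keeps the seven-digit prefix divisible by 7:


select d
from  (select rownum-1 d from dual connect by level <= 10)
where mod(123654*10+d, 7) = 0
and   instr('123654', to_char(d)) = 0;

no rows selected

The only mathematically valid digit is the already-used 3, so “123654” cannot be extended and we must backtrack.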

Brute force

Here is where we re-join my story at the point of the email coming in. The email contained a solution built in SQL (which in itself is a credit to the author), but asked whether better solutions existed. Before exploring that, let us look at the SQL that was presented, which I’ll build up as below:

First get a list of digits:


SQL> select rownum from dual
  2  connect by level < 10;

    ROWNUM
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9

I need 10 sets of these digits to form a 10-digit number, but immediately we recognize that if the full 10-digit number must be divisible by 10, then its last digit must be zero; so if I can build a 9-digit number that is pandigital and polydivisible, the job is done. Thus I need 9 sets of the digits to build a 9-digit number, but I must also build all the 8-digit numbers, the 7-digit numbers and so on, in order to test each of these levels of divisibility as well. The SQL now looks like this:


with digits as  
( 
  select  
      level c 
  from  
      (select 1 from dual)  
  connect by  
      level <= 9 
) 
select  
    to_number(x1.c) num1 
,   to_number(x1.c||x2.c) num2 
,   to_number(x1.c||x2.c||x3.c) num3 
,   to_number(x1.c||x2.c||x3.c||x4.c) num4 
,   to_number(x1.c||x2.c||x3.c||x4.c||x5.c) num5 
,   to_number(x1.c||x2.c||x3.c||x4.c||x5.c||x6.c) num6 
,   to_number(x1.c||x2.c||x3.c||x4.c||x5.c||x6.c||x7.c) num7 
,   to_number(x1.c||x2.c||x3.c||x4.c||x5.c||x6.c||x7.c||x8.c) num8 
,   to_number(x1.c||x2.c||x3.c||x4.c||x5.c||x6.c||x7.c||x8.c||x9.c) num9 
from  
    digits x1 
,   digits x2 
,   digits x3  
,   digits x4  
,   digits x5  
,   digits x6  
,   digits x7  
,   digits x8  
,   digits x9  

A nine-way join of nine digits is hundreds-of-millions-of-rows territory, and running the SQL above never came back on my machine, so we need to start pruning the candidate rows down to a more manageable size (even for the database, or more accurately, my laptop!).
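
To put a number on that (my own aside), the Cartesian product of nine copies of the nine digits is:


select power(9,9) candidate_rows from dual;

which returns 387,420,489, so just shy of 400 million combinations to grind through.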

Pandigital means none of the digits are allowed to repeat, so we can add that to the SQL.


SQL> with digits as
  2    (
  3      select
  4          level c
  5      from
  6          (select 1 from dual)
  7      connect by
  8          level <= 9
  9    )
 10  select
 11      to_number(x1.c) num1
 12  ,   to_number(x1.c||x2.c) num2
 13  ,   to_number(x1.c||x2.c||x3.c) num3
 14  ,   to_number(x1.c||x2.c||x3.c||x4.c) num4
 15  ,   to_number(x1.c||x2.c||x3.c||x4.c||x5.c) num5
 16  ,   to_number(x1.c||x2.c||x3.c||x4.c||x5.c||x6.c) num6
 17  ,   to_number(x1.c||x2.c||x3.c||x4.c||x5.c||x6.c||x7.c) num7
 18  ,   to_number(x1.c||x2.c||x3.c||x4.c||x5.c||x6.c||x7.c||x8.c) num8
 19  ,   to_number(x1.c||x2.c||x3.c||x4.c||x5.c||x6.c||x7.c||x8.c||x9.c) num9
 20   from
 21       digits x1
 22   ,   digits x2
 23   ,   digits x3
 24   ,   digits x4
 25   ,   digits x5
 26   ,   digits x6
 27   ,   digits x7
 28   ,   digits x8
 29   ,   digits x9
 30   where
 31       x2.c <> x1.c
 32   and x3.c <> x1.c and x3.c <> x2.c
 33   and x4.c <> x1.c and x4.c <> x2.c and x4.c <> x3.c
 34   and x5.c <> x1.c and x5.c <> x2.c and x5.c <> x3.c and x5.c <> x4.c
 35   and x6.c <> x1.c and x6.c <> x2.c and x6.c <> x3.c and x6.c <> x4.c and x6.c <> x5.c
 36   and x7.c <> x1.c and x7.c <> x2.c and x7.c <> x3.c and x7.c <> x4.c and x7.c <> x5.c and x7.c <> x6.c
 37   and x8.c <> x1.c and x8.c <> x2.c and x8.c <> x3.c and x8.c <> x4.c and x8.c <> x5.c and x8.c <> x6.c and x8.c <> x7.c
 38   and x9.c <> x1.c and x9.c <> x2.c and x9.c <> x3.c and x9.c <> x4.c and x9.c <> x5.c and x9.c <> x6.c and x9.c <> x7.c and x9.c <> x8.c

Temporarily I’ll replace the string concatenation with a COUNT to see what the candidate set now looks like:


SQL>       with digits as
  2        (
  3          select
  4              level c
  5          from
  6              (select 1 from dual)
  7          connect by
  8              level <= 9
  9        )
 10        select  count(*)
 11        from
 12            digits x1
 13        ,   digits x2
 14        ,   digits x3
 15        ,   digits x4
 16        ,   digits x5
 17        ,   digits x6
 18        ,   digits x7
 19        ,   digits x8
 20        ,   digits x9
 21        where
 22            x2.c <> x1.c
 23        and x3.c <> x1.c and x3.c <> x2.c
 24        and x4.c <> x1.c and x4.c <> x2.c and x4.c <> x3.c
 25        and x5.c <> x1.c and x5.c <> x2.c and x5.c <> x3.c and x5.c <> x4.c
 26        and x6.c <> x1.c and x6.c <> x2.c and x6.c <> x3.c and x6.c <> x4.c and x6.c <> x5.c
 27        and x7.c <> x1.c and x7.c <> x2.c and x7.c <> x3.c and x7.c <> x4.c and x7.c <> x5.c and x7.c <> x6.c
 28        and x8.c <> x1.c and x8.c <> x2.c and x8.c <> x3.c and x8.c <> x4.c and x8.c <> x5.c and x8.c <> x6.c and x8.c <> x7.c
 29        and x9.c <> x1.c and x9.c <> x2.c and x9.c <> x3.c and x9.c <> x4.c and x9.c <> x5.c and x9.c <> x6.c and x9.c <> x7.c and x9.c <> x8.c
 30  /

  COUNT(*)
----------
    362880

We’re down from hundreds of millions to 362,880 rows (which is exactly 9!, the number of permutations of nine distinct digits), much more palatable for my laptop but definitely still not in the realm of manual inspection. We can finally add in our polydivisibility check, and we arrive at the answer:


SQL> select
  2      to_char(num9)||'0' answer
  3  from
  4      (
  5        with digits as
  6        (
  7          select
  8              level c
  9          from
 10              (select 1 from dual)
 11          connect by
 12              level <= 9
 13        )
 14        select
 15            to_number(x1.c) num1
 16        ,   to_number(x1.c||x2.c) num2
 17        ,   to_number(x1.c||x2.c||x3.c) num3
 18        ,   to_number(x1.c||x2.c||x3.c||x4.c) num4
 19        ,   to_number(x1.c||x2.c||x3.c||x4.c||x5.c) num5
 20        ,   to_number(x1.c||x2.c||x3.c||x4.c||x5.c||x6.c) num6
 21        ,   to_number(x1.c||x2.c||x3.c||x4.c||x5.c||x6.c||x7.c) num7
 22        ,   to_number(x1.c||x2.c||x3.c||x4.c||x5.c||x6.c||x7.c||x8.c) num8
 23        ,   to_number(x1.c||x2.c||x3.c||x4.c||x5.c||x6.c||x7.c||x8.c||x9.c) num9
 24        from
 25            digits x1
 26        ,   digits x2
 27        ,   digits x3
 28        ,   digits x4
 29        ,   digits x5
 30        ,   digits x6
 31        ,   digits x7
 32        ,   digits x8
 33        ,   digits x9
 34        where
 35            x2.c <> x1.c
 36        and x3.c <> x1.c and x3.c <> x2.c
 37        and x4.c <> x1.c and x4.c <> x2.c and x4.c <> x3.c
 38        and x5.c <> x1.c and x5.c <> x2.c and x5.c <> x3.c and x5.c <> x4.c
 39        and x6.c <> x1.c and x6.c <> x2.c and x6.c <> x3.c and x6.c <> x4.c and x6.c <> x5.c
 40        and x7.c <> x1.c and x7.c <> x2.c and x7.c <> x3.c and x7.c <> x4.c and x7.c <> x5.c and x7.c <> x6.c
 41        and x8.c <> x1.c and x8.c <> x2.c and x8.c <> x3.c and x8.c <> x4.c and x8.c <> x5.c and x8.c <> x6.c and x8.c <> x7.c
 42        and x9.c <> x1.c and x9.c <> x2.c and x9.c <> x3.c and x9.c <> x4.c and x9.c <> x5.c and x9.c <> x6.c and x9.c <> x7.c and x9.c <> x8.c
 43      )
 44  where
 45      mod(num1,1) = 0
 46  and mod(num2,2) = 0
 47  and mod(num3,3) = 0
 48  and mod(num4,4) = 0
 49  and mod(num5,5) = 0
 50  and mod(num6,6) = 0
 51  and mod(num7,7) = 0
 52  and mod(num8,8) = 0
 53  and mod(num9,9) = 0
 54  /

ANSWER
-----------------------------------------
3816547290

Finding improvements

We now have a SQL statement that solves Conway’s puzzle, but it is a weighty tome, hence the request for “cleaner” solutions. Just like the original solution, I figured I would start with a list of digits and work from there.


SQL> select rownum from dual
  2  connect by level < 10;

    ROWNUM
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9

Simply due to years of telling people not to treat numbers as strings, my plan was to derive each next set of digits without concatenation: take the current list, multiply it by 10 (which “shifts” the digits to the left), and join it to itself to append the next digit. The term “itself” was the lightbulb moment for me: we can use recursive SQL to generate our digits rather than a join. I’ll build it up through the powers of 10:


SQL> with digit(pos,val) as
  2  ( select 1 pos,
  3           rownum val
  4    from   dual
  5    connect by level < 10
  6    union all
  7    select digit.pos+1,
  8           10*digit.val+(x-1)
  9    from digit, ( select level x
10                  from dual
11                  connect by level <= 10)
12    where digit.pos < 2
13  )
14  select *
15  from digit;

       POS        VAL
---------- ----------
         1          1
         1          2
         1          3
         1          4
         1          5
         1          6
         1          7
         1          8
         1          9
         2         10
         2         11
         2         12
         2         13
     ...
         2         96
         2         97
         2         98
         2         99

99 rows selected.

Line 12, with its bound of less than 2, gives me the 1- and 2-digit numbers. Bumping the bound on line 12 up to 3, I get the 1-, 2- and 3-digit numbers:


SQL> with digit(pos,val) as
  2  ( select 1 pos,
  3           rownum val
  4    from   dual
  5    connect by level < 10
  6    union all
  7    select digit.pos+1,
  8           10*digit.val+(x-1)
  9    from digit, ( select level x
10                  from dual
11                  connect by level <= 10)
12    where digit.pos < 3
13  )
14  select *
15  from digit;

       POS        VAL
---------- ----------
         1          1
         1          2
         1          3
         1          4
         1          5
         1          6
         1          7
         1          8
         1          9
         2         10
         2         11
         2         12
       ...
         2         96
         2         97
         2         98
         2         99
         3        100
         3        101
         3        102
       ...
         3        997
         3        998
         3        999

999 rows selected.

And so forth. Thus, to get all of the 9-digit numbers (and then append a 0 to each one for the tenth digit, which we know to be zero), we just need this:


with digit(pos,val) as
( select 1 pos,
         rownum val
  from   dual
  connect by level < 10 
  union all
  select digit.pos+1,
         10*digit.val+(x-1)
  from digit, ( select level x 
                from dual 
                connect by level <= 10)
  where digit.pos < 9
)
select val*10
from digit
where pos = 9;

At this point, whilst the SQL is leaner, it is still going to spit out the best part of a billion rows, so the task now becomes folding our rules into the recursive definition. Because within each level (depth) of the recursion we have access to the digit position, ie the “first”, the “second” etc, we can fold our modulo arithmetic right into the recursive definition to eliminate candidates at each level before descending into the next:


SQL> with digit(pos,val) as
  2  ( select 1 pos,
  3           rownum val
  4    from   dual
  5    connect by level < 10
  6    union all
  7    select digit.pos+1,
  8           10*digit.val+(x-1)
  9    from digit, ( select level x
10                  from dual
11                  connect by level <= 10)
12    where digit.pos < 9
13    and   mod(10*digit.val+(x-1),digit.pos+1) = 0
14  )
15  select val*10 potential
16  from digit
17  where pos=9;

POTENTIAL
----------
1020005640
1020061620
1020068010
1020068820
...
...
9872527230
9872583210
9876006450
9876062430
9876069630
9876545640

2492 rows selected.

This immediately drops us from nearly a billion rows to 2492 candidates. All that remains is to ensure that no digit is duplicated. Because we are building up the numbers from individual digits, as we recurse we can add up those digits to maintain a running sum of digits. For example, just with 3 digits we get:


SQL> with digit(pos,val,sum_of_digit) as
  2  ( select 1 pos,
  3           rownum val,
  4           rownum sum_of_digit
  5    from dual
  6    connect by level < 10
  7    union all
  8    select digit.pos+1,
  9           10*digit.val+(x-1),
10           digit.sum_of_digit+x-1
11    from digit, ( select level x
12                  from dual
13                  connect by level <= 10)
14    where digit.pos < 3
15  )
16  select *
17  from digit
18  /

       POS        VAL SUM_OF_DIGIT
---------- ---------- ------------
         1          1            1
         1          2            2
         1          3            3
...
         2         43            7
         2         44            8
         2         45            9
         2         46           10
         2         47           11
         2         92           11
         2         93           12
         2         94           13
         2         95           14
         2         96           15
...
         3        770           14
         3        771           15
         3        772           16
         3        773           17
         3        774           18
         3        775           19
         3        776           20
         3        777           21
         3        778           22

This is useful because we know that when no digit repeats, the sum of the digits must be 1+2+3+…+9 = 45. (The converse is not guaranteed, since some numbers with repeated digits also sum to 45, but it is a cheap filter.) This gets us down to 672 candidates:


SQL> with digit(pos,val,sum_of_digit) as
  2  ( select 1 pos,
  3           rownum val,
  4           rownum sum_of_digit
  5    from dual
  6    connect by level < 10
  7    union all
  8    select digit.pos+1,
  9           10*digit.val+(x-1),
10           digit.sum_of_digit+x-1
11    from digit, ( select level x
12                  from dual
13                  connect by level <= 10)
14    where digit.pos < 9
15    and   mod(10*digit.val+(x-1),digit.pos+1) = 0
16  )
17  select val*10 potential
18  from digit
19  where pos=9
20  and  sum_of_digit = 45;

POTENTIAL
----------
1088528850
1088584830
1232588880
...
9872520840
9872527230
9872583210
9876006450
9876062430

672 rows selected.

All that is left now is to ensure there are no repeated digits. To do that, within the recursive layer I have two things:

  • all the digits so far in a number, plus
  • the incoming digit to be appended.

So I can just compare the incoming digit with the existing list of digits to make sure there is no duplicate. Having been on my high horse about not treating numbers as strings, I decided the best way to do this was with some simple INSTR:


SQL> with digit(pos,val,sum_of_digit) as
  2  ( select 1 pos,
  3           rownum val,
  4           rownum sum_of_digit
  5    from dual
  6    connect by level < 10
  7    union all
  8    select digit.pos+1,
  9           10*digit.val+(x-1),
10           digit.sum_of_digit+x-1
11    from digit, ( select level x
12                  from dual
13                  connect by level <= 10)
14    where digit.pos < 9
15    and   mod(10*digit.val+(x-1),digit.pos+1) = 0
16    and   instr(to_char(digit.val),to_char(x-1))=0
17  )
18  select val*10 answer
19  from digit
20  where pos=9
21  and  sum_of_digit = 45;

    ANSWER
----------
3816547290

We’re done! Another example of just how cool the SQL language is.
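
As a final sanity check (my own addition, not part of the original email), one query can verify every prefix of the answer at once:


select n,
       trunc(3816547290/power(10,10-n)) prefix,
       mod(trunc(3816547290/power(10,10-n)), n) remainder
from  (select level n from dual connect by level <= 10);

Every row comes back with a remainder of 0, confirming the polydivisibility of 3816547290 (the pandigitality is obvious by inspection).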

RIP John Conway.