There was some pretty interesting feedback on yesterday’s post, so I thought I would mention it in a follow-up post, so it doesn’t get lost in the wasteland of blog comments.
Remember, I wasn’t saying certain types of tweets were necessarily good or bad. I was talking about how *I* rate them as far as content production and how they *might* be rated by an evangelism program…
So I’m still of a mind that Twitter is useful, but shouldn’t be the basis of your community contribution if you are hoping to join an evangelism program.
Update: I’ve tried to emphasize it a number of times, but I think it’s still getting lost in the mix. This is not about Twitter=good/bad. It’s about the value you as an individual are adding by tweeting other people’s content, as opposed to creating good content yourself. All community participation is good, but just tweeting other people’s content is less worthy of attention, *in my opinion*, than producing original content.
If someone asked the question, “What do I need to do to become an Oracle ACE?”, would you advise them to tweet like crazy, or produce some original content? I think that is the crux of the argument.
Of course, it’s just my opinion. I could be wrong.
Another thing that came out of my conversation with Zahid Anwar at OOW15 was about owning your content.
If your intention is to make a name for yourself in the community, it’s important you think about your “brand”. Most of us old-timers didn’t have to worry about this, and sometimes get a bit snooty about the idea of it, but we started early, so it was relatively easy to get noticed. For new people on the scene, it’s a much harder proposition.
It’s possible to write content on sites like Facebook, Google+ and LinkedIn, but I’m not sure that’s the best way to promote “your brand”. In some communities it might be the perfect solution, but in others I think you are in danger of becoming a faceless contributor to their brand.
In my opinion, it would be better to start a blog or website, then post links to your content to the other resources as part of promoting yourself. That way, you remain the owner of the content and it helps promote your brand.
I’ve said similar stuff to this in my Writing Tips series.
(To understand the title, see this Wikipedia entry)
The title could also be: “Do as I say, don’t do as I do”, because I want to remind you of an error that I regularly commit in my demonstrations. Here’s an example:
SQL> create table t (n number);

Table created.
Have you spotted the error yet? Perhaps this will help:
SQL> insert into t select 1 - 1/3 * 3 from dual;

1 row created.

SQL> insert into t select 1 - 3 * 1/3 from dual;

1 row created.

SQL> column n format 9.99999999999999999999999999999999999999999
SQL> select * from t;

                                            N
---------------------------------------------
 .00000000000000000000000000000000000000010
 .00000000000000000000000000000000000000000

2 rows selected.
Spotted the error yet? If not, then perhaps this will help:
SQL> select * from dual where 3 * 1/3 = 1/3 * 3;

no rows selected

SQL> select * from dual where 3 * (1/3) = (1/3) * 3;

D
-
X

1 row selected.
Most computers work in binary, while people (tend to) work in decimal. In binary, dividing by 5 (and hence by 10, since 10 = 2 * 5) cannot be done exactly; Oracle’s NUMBER type works in decimal, where it is dividing by 3 that has no exact result, as the example above shows. Either way the lesson is the same: 1/3 has to be truncated after a finite number of digits, so multiplying it back by 3 does not give exactly 1. Whenever you do arbitrary arithmetic you should use some method to deal with these tiny rounding errors.
In Oracle this means you ought to define all numbers with a precision and scale. Look on it as another form of constraint that helps to ensure the correctness of your data as well as improving performance and reducing wasted storage space.
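As a sketch of the difference this makes, here is the same arithmetic as above inserted into a column declared with an explicit precision and scale (the table name t_fixed is just an illustrative choice):

```sql
-- Same arithmetic as before, but with a constrained precision and scale.
create table t_fixed (n number(10,4));

insert into t_fixed select 1 - 1/3 * 3 from dual;
insert into t_fixed select 1 - 3 * 1/3 from dual;

-- Both values are rounded to 4 decimal places on insert, so the
-- residual error at the 40th digit disappears entirely.
select * from t_fixed where n <> 0;
-- no rows selected
```

The scale acts like a constraint: anything beyond the declared number of decimal places is rounded away at insert time, so downstream equality comparisons behave as people expect.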
If you’ve been building Data Pump jobs via PL/SQL, you might get some part of the definition wrong, leaving the job in the “DEFINING” state, i.e. you were building it but never managed to complete the process. An interesting anomaly is that when this happens, your current session struggles to clean things up:
SQL> select owner_name, job_name, state
  2  from dba_datapump_jobs;

OWNER_NAME                    JOB_NAME               STATE
----------------------------- ---------------------- -----------------------------
SCOTT                         EMP_EXP                DEFINING

SQL> declare
  2    l_datapump_handle NUMBER;
  3    l_datapump_dir    VARCHAR2(20) := 'DATAPUMP_DIR';
  4  begin
  5    l_datapump_handle := dbms_datapump.attach(job_name => 'EMP_EXP');
  6    dbms_datapump.detach(handle => l_datapump_handle);
  7
  8  END;
  9  /
declare
*
ERROR at line 1:
ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 1470
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4455
ORA-06512: at line 5
At which point you’re thinking: what’s wrong? I can see my job in the dictionary, so why can’t I access it? The first thing you should try is a new session.
SQL> conn scott/tiger
Connected.
SQL> declare
  2    l_datapump_handle NUMBER;
  3    l_datapump_dir    VARCHAR2(20) := 'DATAPUMP_DIR';
  4  begin
  5    l_datapump_handle := dbms_datapump.attach(job_name => 'EMP_EXP');
  6    dbms_datapump.detach(handle => l_datapump_handle);
  7
  8  END;
  9  /

PL/SQL procedure successfully completed.
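As an aside, if even a fresh session cannot attach, the row you see in DBA_DATAPUMP_JOBS is backed by the job’s master table, which is created with the same name as the job in the owning schema. A hedged sketch of the heavier cleanup, assuming the job really is orphaned and the master table still exists:

```sql
-- Confirm the job is orphaned: still DEFINING, with no attached sessions.
select owner_name, job_name, state, attached_sessions
from   dba_datapump_jobs
where  job_name = 'EMP_EXP';

-- The master table takes the same name as the job; dropping it
-- removes the orphaned entry from DBA_DATAPUMP_JOBS.
drop table scott.emp_exp purge;
```

Only do this once you are sure no session is genuinely running the job; for a job that can still be attached to, the detach/stop_job route shown above is the right one.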
So I’m about to depart for one day in San Antonio, TX, to present at the International Performance and Capacity Conference, but I wanted to get another blog post out first, this time regarding my survival of Oracle Open World 2015.
I’m not sure if it was just that the schedule was a month later than usual, or that this year was simply more demanding, but I left on Friday feeling utterly exhausted, and judging by the tweets and posts from others on social media, I don’t think I was the only one! I always regret what I didn’t get to do and those I didn’t get to meet, but this year there were even events I was supposed to attend that I couldn’t make! There were just too many demands on my time: I’d get up at 5-6am each morning and collapse into bed between 11pm and midnight each night, thinking about what I still had to get done!
Some of my favorite times were at the demo booths. I really enjoyed talking to everyone and getting the chance to introduce the attendees to Enterprise Manager 13c was a joy. The great new interface and features were impossible not to get excited about!
I always try to get everything prepared before I leave for an event, but some things are outside anyone’s control: my poor co-presenter for Oak Table World, Stewart Bryson, had to go in for emergency surgery on his knee. Rather than deliver my well-known AWR Warehouse presentation sans Stewart’s great OBIEE tier, I chose to build out something new. My mistake: I decided on the topic after drinking out with Alex Gorbachev on Friday evening. I really should know better than this, but instead I attempted to build a Raspberry Pi project with missing hardware until I finally had to admit to Alex that I’d coded my way into a hardware corner, without the time or resources to get out of it. Needless to say, I found myself on Sunday without slides or a technical topic for my presentation on Tuesday morning.
I ended up having to skip the bridge run (in my case, with arthritis, a walk) and got down to building out a new project after seeing the stuffed bears in the Oracle retail store. I knew I wanted to inspire the attendees to get involved with the younger generation, but what could I quickly build, without too many demands on my time, that would do that? After purchasing one of the bears, I went over to Central Computers and added a PiCamera to my already purchased Raspberry Pi investment.
Thanks to some jumper wires from Mark Vilrokx, and a few miscellaneous buttons, safety pins and such from stores downtown, I was ready to start operating on the bear. The number of odd comments I got as I sat in the OTN area, cutting open the bear and pulling the stuffing out to insert the Raspberry Pi, was pretty humorous. I had to warn my husband, as he went into our room much later in the day, that no, I was not building a bomb, and to hide the hardware from the poor maids before they freaked out while cleaning our room!
The end product was pretty cool though. The RPI_OracleBear is a picture-taking, tweeting stuffed bear: push the button on his paw and he snaps a photo and tweets it! You can check him out on Twitter, as he was a guest at a couple of the parties and locations throughout the conference week. The session at OTW went well and I was very happy with the response, especially considering I’d put it together in the last 48 hours between booth duty and other demands. The worst part was that in the middle of coding the Python piece for the project, I lost track of time and missed the Enterprise Manager SIG meeting on Monday evening! This is where hyper-focusing is more of a liability than a gift.
I was able to make all my other commitments throughout the week, including publisher meetings representing my group, meeting with many customers, getting the word out about Oracle Management Cloud, and even demoing Enterprise Manager 13c! I really noticed just how busy I’d been when I finished a lunch debriefing with the Oracle Education Foundation on Thursday afternoon and the realization hit me that I didn’t have anything to do for the next three hours. I’d been running non-stop for so many days that I just didn’t know how to handle any downtime, but I ended up going back to my room to mail out the RPI_OracleBear (no one wants to know what would have happened if I’d tried to take him through the TSA scanners…) and pack my bags, to gladly go home the next morning. All in all, it was a great year, but I’m looking forward to next year’s event being a bit earlier than the end of October!
Things we learned this year:
Last but not least, kudos to Oracle not just for putting on a great conference, but, of all things, for the backpack attendee gift this year. I know it sounds strange, but I normally give these bags away or turn them down. This year, I talked my husband out of his in less than a day and will be using it on my trips for quite some time. It’s compact, attractive and stays on my small shoulders. I expect conferences to cater to the majority of attendees, and let’s be honest, that’s men, but this bag is something both men and women found very functional and appealing. Great job on a small but cool detail for the attendees.
During a conversation with Zahid Anwar at OOW15, the question was asked: is Twitter content a valuable contribution to the community?
The following is *my opinion* on the matter. Other opinions are valid.
The sort of tweets I see fall into the following basic categories:
If you are trying to get on to a community program, like the Oracle ACE Program, *I would* rate Twitter contributions quite low. I would focus on activities where you are providing original content (blogging, whitepapers, books, YouTube etc.) or directly helping people, such as answering forum questions or presenting. Short-form social media is a nice addition, but its value is rather limited in my opinion.
Remember, it’s just my opinion, but I’m interested to know your thoughts.
Update: I think it’s worth clarifying my point some more. I don’t have a problem with any of these types of tweets. I do them all to a greater or lesser extent. The point I’m trying to make (badly) is that the content being pointed to is the “high value” part, in my opinion. The “pointer” (tweet) is of far less value. If someone came to me and said, “I tweet a lot about other people’s content, can I join your community program?” (if I had one), I would probably say no and encourage them to produce their own content. That was the context of the conversation that initiated this post.
APEX 5.0.2 was released just before OOW15. Today is my first day back to work, so I’ve started to patch some stuff. We were already on APEX 5.0.1 across the board, so we didn’t need to do any full installations, just patches.
So far, so good. No problems in any Dev or Test databases. I expect a pretty quick roll-out across the board.
MobaXterm 8.3 has been released.
This is a must for Windows users who use SSH and X Emulation!
I had a recent conversation at Oracle OpenWorld 2015 about a locking anomaly in a 3-node RAC system which was causing unexpected deadlocks. Coincidentally, this conversation came about shortly after I had been listening to Martin Widlake talking about using the procedure dbms_stats.set_table_prefs() to adjust the way that Oracle calculates the clustering_factor for indexes. The juxtaposition of these two topics made me realise that the advice I had given in “Cost Based Oracle – Fundamentals” 10 years ago was (probably) incomplete, and needed some verification. The sticking point was RAC.
In my original comments about setting the “table_cached_blocks” preference (as it is now known), I had pointed out that the effect of ASSM (with its bitmap space management blocks) was to introduce a small amount of random scattering as rows were inserted by concurrent sessions, and that this would adversely affect the clustering_factor of any indexes on the table, so a reasonable default value for the table_cached_blocks preference would be 16.
I had overlooked the fact that in RAC each instance tries to acquire ownership of its own level 1 (L1) bitmap block in an attempt to minimise the amount of global cache contention. If each instance uses a different L1 bitmap block to allocate data blocks then (for tables and their partitions) they won’t be using the same data blocks for inserts, and they won’t even have to pass the bitmap blocks between instances when searching for free space. The consequence of this, though, is that if N separate instances are inserting data into a single table there are typically 16 * N different blocks into which sessions could be inserting concurrently, so the “most recent” data could be scattered across 16N blocks, which means the appropriate value for table_cached_blocks is 16N.
To demonstrate the effect of RAC and multiple L1 blocks, here’s a little demonstration code from a 12c RAC database with 3 active instances.
create tablespace test_8k_assm_auto
        datafile size 67108864
        logging online permanent
        blocksize 8192
        extent management local autoallocate
        default nocompress
        segment space management auto
;

create table t1 (n1 number, c1 char(1000))
storage (initial 8M next 8M)
;
The code above simply creates a tablespace using locally managed extents with system allocated extent sizes, then creates a table in that tablespace with a starting requirement of 8MB. Without this specification of initial the first few extents for the table would have been 64KB thanks to the system allocation algorithm, and that would have spoiled the demonstration because the table would have started by allocating a single extent of 64KB, with just one L1 bitmap block; slightly different effects would also have appeared with an extent size of 1MB – with 2 L1 bitmap blocks – which is the second possible extent size for system allocation.
Having created the table I connected one session to each of the three instances and inserted one row, with commit, from each instance. Then I ran a simple SQL statement to show me the file and block numbers of the rows inserted:
select
        dbms_rowid.rowid_relative_fno(rowid)    file_no,
        dbms_rowid.rowid_block_number(rowid)    block_no,
        count(*)                                rows_in_block
from
        t1
group by
        dbms_rowid.rowid_relative_fno(rowid),
        dbms_rowid.rowid_block_number(rowid)
order by
        dbms_rowid.rowid_relative_fno(rowid),
        dbms_rowid.rowid_block_number(rowid)
;

   FILE_NO   BLOCK_NO ROWS_IN_BLOCK
---------- ---------- -------------
        19        518             1
        19        745             1
        19       2157             1
As you can see, each row has gone into a separate block – more significantly, though, those blocks are a long way apart from each other: they are in completely different sets of 16 blocks. Each instance is working with its own L1 block (there are 16 of them to choose from in an 8MB extent), and has formatted 16 blocks associated with that L1 for its own use.
In fact this simple test highlighted an anomaly that I need to investigate further. In my first test, after inserting just 3 rows into the table I found that Oracle had formatted 288 blocks (18 groups of 16) across 2 extents, far more than seems reasonable. The effect looks hugely wasteful, but that’s mainly because I’ve implied that I have a “large” table into which I’ve then inserted very little data – nevertheless something a little odd has happened. In my second test it got worse because Oracle formatted 16 blocks on the first insert, took that up to 288 blocks on the second insert, then went up to 816 blocks (using a third extent) on the third insert; then in my third test Oracle behaved as I had assumed it ought to, formatting 3 chunks of 16 blocks each in a single extent – but that might have been because I did a truncate rather than a drop and recreate.
Whatever else is going on, the key point of this note is that if you’re trying to get Oracle to give you a better estimate for the clustering_factor in a RAC system then “16 * instance-count” is probably a good starting point for setting the table preference known as table_cached_blocks.
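For a 3-node system like the one above, that advice translates into a one-line preference change (the index name t1_i1 is a hypothetical placeholder, since the demonstration table was created without indexes):

```sql
-- 16 blocks per instance * 3 active instances = 48
begin
        dbms_stats.set_table_prefs(
                ownname => user,
                tabname => 'T1',
                pname   => 'TABLE_CACHED_BLOCKS',
                pvalue  => 48
        );
end;
/

-- The preference only affects the clustering_factor the next time
-- statistics are gathered for the table's indexes.
execute dbms_stats.gather_index_stats(user, 'T1_I1')
```

The preference is sticky, so every subsequent stats collection on the table will use the same value until you change or delete it.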
The anomaly of data being scattered extremely widely with more extents being allocated than you might expect is probably a boundary condition that you don’t have to worry about – until I’ve had time to look at it a little more closely.