Oakies Blog Aggregator

The Proliferation of I.T. Voodoo…

When I say “voodoo” in this context, I’m really talking about bullshit explanations for things based on guesswork, rather than reasoned argument built using facts and investigation.

It’s really easy for voodoo explanations to proliferate when people are starved of facts. There are several ways this can happen, but a couple of them that spring to mind and really piss me off are:

  • A user reports a problem. You fix it, but don’t give a meaningful explanation of what you have done. As a result, the user is left to “make up” an explanation to tell their superiors, which then becomes part of the folklore of that department. When you fix a problem, you need to provide a technical explanation of what you have done for those that can cope with it, and a more layman-friendly version for those that can’t. If you don’t do this, you are starting to dig a really big hole for yourself. Users will make shit up that will haunt you forever. Next time you want to push through that big change, it will be blocked because, “Bob says that we always have a problem when you reboot the payroll server on a Tuesday if the parking barrier is locked in an upright position. Today is Tuesday and the parking barrier was locked in an upright position this morning, so we don’t want to risk it!” Once this shit takes hold, there is no going back!
  • A user reports a problem. You don’t do anything, but it mysteriously “fixes” itself. You need to make sure all parties know you’ve done nothing. You also need to suggest someone actually finds the root cause of the issue, without trying to start a witch hunt. Unless you know why something has happened, people will make up bullshit explanations and they will become department folklore. See the previous point.

For so long, I.T. has had a poor reputation where user engagement is concerned, and that *always* generates more problems for us than it does for the users. Get with the flippin’ program!

Cheers

Tim…

PS. Can you tell I’m pissed off about something? :)


The Proliferation of I.T. Voodoo… was first posted on August 4, 2015 at 12:35 pm.

On the Topic of Technology… 8 – First 48 hours with Windows 10

August 3, 2015 (Updated August 5, 2015, August 10, 2015, August 29, 2015) (Back to the Previous Post in the Series) I have been testing Windows 10 under the Windows Insider Program for several months using an old Dell Precision Core 2 Extreme computer (with NVidia graphics card) that was retired from engineering service a […]

OTN Tour of Latin America 2015 : UYOUG, Uruguay – Day 1

After 15 hours of sleep I still managed to feel tired. :) I went for breakfast at 6:30, then started to feel a little weird, so I took some headache pills and headed back to bed for an hour before meeting up with Debra and Mike to head down to the venue for the first day of the UYOUG leg of the tour…

The order of events went like this:

  • Pablo Ciccarelo started with an introduction to OTN and the ACE program, which was in Spanish, so I ducked out of it. :)
  • Mike Dietrich speaking about “How Oracle Single/Multitenant will change a DBA’s life”.
  • Me with “Pluggable Databases : What they will break and why you should use them anyway!” There was some crossover with Mike’s session, but we both emphasised different things, which is interesting in itself. :)
  • Debra Lilley with “PaaS4SaaS”. The talk focused on a POC for using PaaS to extend the functionality of Fusion Apps, which is a SaaS product.
  • Me with “Oracle Database Consolidation : It’s not all about Oracle database 12c”. I think this is the least technical talk I’ve ever done and that makes me rather nervous. Technical content and demos are a reassuring safety blanket for me, so having them taken away feels a bit like being naked in public (why am I now thinking of Bjoern?). The session is a beginner session, so I hope people didn’t come expecting something more than I delivered. See, I’m paranoid already!
  • Mike Dietrich on “Simple Minimal Downtime Migration to Oracle 12c using Full Transportable Export/Import”. I think I’ve used every feature discussed in this session, but I’ve never used them all together in this manner. I think I may go back to the drawing board for one of the migrations I’ve got coming up in the next few months.
  • Debra Lilley with “Are cloud apps really ready?”. There was some similarity between the message Debra was putting out here and some of the stuff I spoke about in my final talk.
  • Me with “It’s raining data! Oracle Databases in the Cloud”. This was also not a heavy technical session, but because so few people have experience of running databases in the cloud at the moment, I think it has a wider appeal, so I’m not so paranoid about the limited technical content.

So that was the first day of the UYOUG conference done. Tomorrow is an easy day for me. I’ve got a panel session in the middle of the day, then I’m done. :)

Thanks to everyone who came to my sessions. I hope you found them useful.

Having slept through yesterday’s social event, I will be going out to get some food tonight. They eat really late here, so by the time we get some food I’ll probably be thinking about bed. :)

Cheers

Tim…


OTN Tour of Latin America 2015 : UYOUG, Uruguay – Day 1 was first posted on August 3, 2015 at 10:53 pm.

Provisioning Virtual ASM (vASM)


In this post, we’re going to use Delphix to create a virtual ASM diskgroup, and provision a clone of the virtual ASM diskgroup to a target system. I call it vASM, which is pronounced “vawesome.” Let’s make it happen.

She’s always hungry. She always needs to feed. She must eat. – Gollum

Most viewers assume Gollum was talking about Shelob the giant spider here, but I have it on good authority that he was actually talking about Delphix. You see, Delphix (Data tamquam servitium in trinomial nomenclature) is the world’s most voracious datavore. Simply put, Delphix eats all the data.

Now friends of mine will tell you that I absolutely love to cook, and they actively make it a point to never talk to me about cooking because they know I’ll go on like Bubba in Forrest Gump and recite the million ways to make a slab of meat. But if there’s one thing I’ve learned from all my cooking, it’s that it’s fun to feed people who love to eat. With that in mind, I went searching for new recipes that Delphix might like and thought, “what better meal for a ravenous data muncher than an entire volume management system?”

vASM

(Image: Delphix architecture)

In normal use, Delphix links to an Oracle database and ingests changes over time by using RMAN “incremental as of SCN” backups, archive logs, and online redo. This creates a compressed, deduped timeline (which we call a Timeflow) that you can provision as one or more Virtual Databases (VDBs) from any point in time.

However, Delphix has another interesting feature known as AppData, which allows you to link to and provision copies of flat files: unstructured files, scripts, software binaries, code repositories, etc. It uses rsync to build a Timeflow, and allows you to provision one or more vFiles from any point in time. On top of that (and even cooler in my opinion), you have the ability to create “empty vFiles”, which amounts to an empty directory on a system, except that the storage for the directory is served straight from Delphix. It is this area that serves as an excellent home for ASM.

We’re going to create an ASM diskgroup using Delphix storage, and connect to it with Oracle’s dNFS protocol. Because the ASM storage lives completely on Delphix, it takes advantage of Delphix’s deduplication, compression, snapshots, and provisioning capabilities.

Some of you particularly meticulous (read: cynical) readers may wonder about running ASM over NFS, even with dNFS. I’d direct your attention to this excellent test by Yury Velikanov. Of course, your own testing is always recommended.
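
If you want to confirm that dNFS is really in use on your own setup, the dNFS v$ views are a quick check (a sketch; these views have been around since 11g):

select svrname, dirname from v$dnfs_servers;
select pnum, svrname, path from v$dnfs_channels;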

I built this with:

  • A Virtual Private Cloud (VPC) in Amazon Web Services
  • Redhat Enterprise Linux 6.5 Source and Target servers
    • Oracle 11.2.0.4 Grid Infrastructure
    • 11.2.0.4 Oracle Enterprise Edition
  • Delphix Engine 4.2.4.0
  • Alchemy

Making a vASM Diskgroup

Before we get started, let’s turn on dNFS while nothing is running. This is as simple as running the following commands in the GRID home:

[oracle@ip-172-31-0-61 lib]$ cd $ORACLE_HOME/rdbms/lib
[oracle@ip-172-31-0-61 lib]$ pwd
/u01/app/oracle/product/11.2.0/grid/rdbms/lib
[oracle@ip-172-31-0-61 lib]$ make -f ins_rdbms.mk dnfs_on
rm -f /u01/app/oracle/product/11.2.0/grid/lib/libodm11.so; cp /u01/app/oracle/product/11.2.0/grid/lib/libnfsodm11.so /u01/app/oracle/product/11.2.0/grid/lib/libodm11.so
[oracle@ip-172-31-0-61 lib]$

Now we can create the empty vFiles area in Delphix. This can be done through the Delphix command line interface, API, or through the GUI. It’s exceedingly simple to do, requiring only a server selection and a path.

(Gallery: Create vFiles → Choose Path for vASM disks → Choose Delphix Name → Set Hooks → Finalize)

Let’s check our Linux source environment and see the result:

[oracle@ip-172-31-0-61 lib]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1            9.8G  5.0G  4.3G  54% /
tmpfs                 7.8G   94M  7.7G   2% /dev/shm
/dev/xvdd              40G   15G   23G  39% /u01
172.31.7.233:/domain0/group-35/appdata_container-32/appdata_timeflow-44/datafile
                       76G     0   76G   0% /delphix/mnt

Now we’ll create a couple of ASM disk files that we can add to an ASM diskgroup:

[oracle@ip-172-31-0-61 lib]$ cd /delphix/mnt
[oracle@ip-172-31-0-61 mnt]$ truncate --size 20G disk1
[oracle@ip-172-31-0-61 mnt]$ truncate --size 20G disk2
[oracle@ip-172-31-0-61 mnt]$ ls -ltr
total 1
-rw-r--r--. 1 oracle oinstall 21474836480 Aug  2 19:26 disk1
-rw-r--r--. 1 oracle oinstall 21474836480 Aug  2 19:26 disk2

Usually the “dd if=/dev/zero of=/path/to/file” command is used for this purpose, but I used the “truncate” command instead: it quickly creates sparse files, which are perfectly suitable here.

And we’re ready! Time to create our first vASM diskgroup.

SQL> create diskgroup data
  2  disk '/delphix/mnt/disk*';

Diskgroup created.

SQL> select name, total_mb, free_mb from v$asm_diskgroup;

NAME				 TOTAL_MB    FREE_MB
------------------------------ ---------- ----------
DATA				    40960      40858

SQL> select filename from v$dnfs_files;

FILENAME
--------------------------------------------------------------------------------
/delphix/mnt/disk1
/delphix/mnt/disk2

The diskgroup has been created, and we verified that it is using dNFS. But creating a diskgroup is only 1/4th the battle. Let’s create a database in it. I’ll start with the simplest of pfiles, making use of OMF to get the database up quickly.

[oracle@ip-172-31-0-61 ~]$ cat init.ora
db_name=orcl
db_create_file_dest=+DATA
sga_target=4G
diagnostic_dest='/u01/app/oracle'

And create the database:

SQL> startup nomount pfile='init.ora';
ORACLE instance started.

Total System Global Area 4275781632 bytes
Fixed Size		    2260088 bytes
Variable Size		  838861704 bytes
Database Buffers	 3422552064 bytes
Redo Buffers		   12107776 bytes
SQL> create database;

Database created.

I’ve also run catalog.sql, catproc.sql, and pupbld.sql and created an SPFILE in ASM, but I’ll skip pasting those here for at least some semblance of brevity. You’re welcome. I also created a table called “TEST” that we’ll try to query after the next part.
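
For reference, putting the SPFILE into ASM is a one-liner once the instance is up (a sketch; with OMF, ASM picks the actual file name under +DATA):

SQL> create spfile='+DATA' from pfile='init.ora';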

Cloning our vASM Diskgroup

Let’s recap what we’ve done thus far:

  • Created an empty vFiles area from Delphix on our Source server
  • Created two 20GB “virtual” disk files with the truncate command
  • Created a +DATA ASM diskgroup with the disks
  • Created a database called “orcl” on the +DATA diskgroup

In sum, Delphix has eaten well. Now it’s time for Delphix to do what it does best, which is to provision virtual objects. In this case, we will snapshot the vFiles directory containing our vASM disks, and provision a clone of them to the target server. You can follow along with the gallery images below.

(Gallery: Snapshot vASM area → Provision vFiles → Choose the Target and vASM Path → Name the Target for vASM → Set Hooks → Finalize Target)

Here’s the vASM location on the target system:

[oracle@ip-172-31-2-237 ~]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1            9.8G  4.1G  5.2G  44% /
tmpfs                 7.8G   92M  7.7G   2% /dev/shm
/dev/xvdd              40G   15G   23G  39% /u01
172.31.7.233:/domain0/group-35/appdata_container-34/appdata_timeflow-46/datafile
                       76G  372M   75G   1% /delphix/mnt

Now we’re talking. Let’s bring up our vASM clone on the target system!

SQL> alter system set asm_diskstring = '/delphix/mnt/disk*';

System altered.

SQL> alter diskgroup data mount;

Diskgroup altered.

SQL> select name, total_mb, free_mb from v$asm_diskgroup;

NAME				 TOTAL_MB    FREE_MB
------------------------------ ---------- ----------
DATA				    40960      39436

But of course, we can’t stop there. Let’s crack it open and access the tasty “orcl” database locked inside. I copied over the “initorcl.ora” file from my source so it knows where to find the SPFILE in ASM.
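
That file is just a one-line pointer to the SPFILE inside ASM, along these lines (the exact path is illustrative; OMF names the file itself):

SPFILE='+DATA/orcl/spfileorcl.ora'

Let’s start it up and verify.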

SQL> startup;
ORACLE instance started.

Total System Global Area 4275781632 bytes
Fixed Size		    2260088 bytes
Variable Size		  838861704 bytes
Database Buffers	 3422552064 bytes
Redo Buffers		   12107776 bytes
Database mounted.
Database opened.
SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+DATA/orcl/datafile/system.259.886707477
+DATA/orcl/datafile/sysaux.260.886707481
+DATA/orcl/datafile/sys_undots.261.886707485

SQL> select * from test;

MESSAGE
--------------------------------------------------------------------------------
WE DID IT!

As you can see, the database came online, the datafiles are located on our virtual ASM diskgroup, and the table I created prior to the clone operation came over with the database inside of ASM. I declare this recipe a resounding success.

Conclusion

A lot happened here. Such is the case with a good recipe. But in the end, my actions were deceptively simple:

  • Create a vFiles area
  • Create disk files and an ASM diskgroup inside of Delphix vFiles
  • Create an Oracle database inside the ASM diskgroup
  • Clone the Delphix vFiles to a target server
  • Bring up vASM and the Oracle database on the target server

With this capability, it’s possible to do some pretty incredible things. We can provision multiple copies of one or more vASM diskgroups to as many systems as we please. What’s more, we can use Delphix’s data controls to rewind vASM diskgroups, refresh them from their source diskgroups, and even create vASM diskgroups from cloned vASM diskgroups. Delphix can also replicate vASM to other Delphix engines so you can provision in other datacenters or cloud platforms. And did I mention it works with RAC? vFiles can be mounted on multiple systems, a feature we use for multi-tier EBS provisioning projects.

But perhaps the best feature is that you can use Delphix’s vASM disks as a failgroup to a production ASM diskgroup. That means that your physical ASM diskgroups (using normal or high redundancy) can be mirrored via Oracle’s built in rebalancing to a vASM failgroup comprised of virtual disks from Delphix. In the event of a disk loss on your source environment, vASM will protect the diskgroup. And you can still provision a copy of the vASM diskgroup to another system and force mount for the same effect we saw earlier.
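
A minimal sketch of that idea, assuming two physical disks alongside the two Delphix-backed files from earlier (the /dev paths are hypothetical):

SQL> create diskgroup data normal redundancy
  2  failgroup phys disk '/dev/sdb1', '/dev/sdc1'
  3  failgroup vasm disk '/delphix/mnt/disk1', '/delphix/mnt/disk2';

Lose a physical disk and the diskgroup stays online from the vASM failgroup, with ASM mirroring keeping the two copies in sync.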

There is plenty more to play with and discover here. But we’ll save that for dessert. Delphix is hungry again.

The post Provisioning Virtual ASM (vASM) appeared first on Oracle Alchemist.

Demo data

One of the articles I wrote for redgate’s AllthingsOracle site some time ago included a listing of the data distribution for some client data which I had camouflaged. A recent comment on the article asked how I had generated the data – of course the answer was that I hadn’t generated it, but I had done something to take advantage of its existence without revealing the actual values. This article is just a little note showing what I did; it’s not intended as an elegant and stylish display of perfectly optimised SQL, it’s an example of a quick and dirty one-off hack that wasn’t (in my case) a disaster to run.

I’ve based the demonstration on the view all_objects. We start with a simple query showing the distribution of the values of column object_type:


break on report
compute sum of count(*) on report

select
        object_type, count(*)
from
        all_objects
group by object_type
order by
        count(*) desc
;

OBJECT_TYPE           COUNT(*)
------------------- ----------
SYNONYM                  30889
JAVA CLASS               26447
...
JAVA RESOURCE              865
TRIGGER                    509
JAVA DATA                  312
...
JAVA SOURCE                  2
DESTINATION                  2
LOB PARTITION                1
EDITION                      1
MATERIALIZED VIEW            1
RULE                         1
                    ----------
sum                      76085

44 rows selected.

Starting from this data set I want 44 randomly generated strings and an easy way to translate each actual object type into one of those strings. There are various ways to do this, but the code I hacked out put the original query into an inline view and surrounded it with a query that added a rownum to the result set, giving each row a unique id. It then used the well-known and much-loved “connect by level” query against dual to generate a numbered list of randomly generated strings as a second inline view that I could use in a join to do the translation.


execute dbms_random.seed(0)

column random_string format a6

select
        generator.id,
        dbms_random.string('U',6)       random_string,
        sum_view.specifier,
        sum_view.ct                     "COUNT(*)"
from
        (
        select
                rownum  id
                from    dual
                connect by
                        level <= 100
        )       generator,
        (
        select
                rownum          id,
                specifier,
                ct
        from
                (
                select
                        object_type specifier, count(*) ct
                from
                        all_objects
                group by
                        object_type
                order by
                        count(*) desc
                )
        )       sum_view
where
        sum_view.id = generator.id
order by
        ct desc
;

        ID RANDOM SPECIFIER             COUNT(*)
---------- ------ ------------------- ----------
         1 BVGFJB SYNONYM                  30889
         2 LYYVLH JAVA CLASS               26447
...
         9 DNRYKC JAVA RESOURCE              865
        10 BEWPEQ TRIGGER                    509
        11 UMVYVP JAVA DATA                  312
...
        39 EYYFUJ JAVA SOURCE                  2
        40 SVWKRC DESTINATION                  2
        41 CFKBRX LOB PARTITION                1
        42 ZWVEVH EDITION                      1
        43 DDAHZX MATERIALIZED VIEW            1
        44 HFWZBX RULE                         1
                                      ----------
sum                                        76085

44 rows selected.

I’ve selected the id and original value here to show the correspondence, but didn’t need to show them in the original posting. I’ve also left the original (now redundant) “order by” clause in the main inline view, and you’ll notice that even though I needed only 44 distinct strings for the instance I produced these results on, I generated 100 values as a safety margin for testing the code on a couple of other versions of Oracle.

A quick check for efficiency – a brief glance at the execution plan, which might have prompted me to add a couple of /*+ no_merge */ hints had they been necessary – showed that the work done was basically the work of the original query plus a tiny increment for adding the rownum and doing the “translation join”. Of course, if I’d then wanted to translate the full 76,000 row data set and save it as a table, I’d have had to join the result set above back to a second copy of all_objects – and it’s translating full data sets, while trying to deal with problems of referential integrity and correlation, where the time disappears when masking data.
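
For what it’s worth, here’s a sketch of that second join, materialising the mapping first so the random strings stay fixed (the table names type_map and masked_objects are invented for the illustration):

create table type_map as
select
        rownum                          id,
        specifier,
        dbms_random.string('U',6)       random_string
from    (
        select object_type specifier
        from   all_objects
        group by object_type
        );

create table masked_objects as
select
        ao.owner, ao.object_name, tm.random_string object_type
from
        all_objects ao,
        type_map    tm
where
        tm.specifier = ao.object_type;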

It is a minor detail of this code that it produced fixed-length strings (which matched the structure of the original client data). Had I felt the urge I might have used something like dbms_random.string('U', trunc(dbms_random.value(4,21))) to give me a random distribution of string lengths between 4 and 20. Getting fussier, I might have extracted the distinct values for object_type and then generated a random string that matched the length of the value it was due to replace. Fussier still, I might have generated the right number of random strings matching the length of the longest value, sorted the original and random values into alphabetical order to align them, then trimmed each random value to the length of the corresponding original value.
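
A minimal illustration of the variable-length variant (seeded so the output is repeatable):

execute dbms_random.seed(0)

select  dbms_random.string('U', trunc(dbms_random.value(4,21))) random_string
from    dual
connect by level <= 5;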

It’s extraordinary how complicated it can be to mask data realistically – even when you’re looking at just one column in one table. And here’s a related thought – if an important type of predicate in the original application with the original data is where object_type like ‘PACK%’, how do you ensure that your masked data is consistent with the data that would be returned by this query, and how do you determine the value to use instead of “PACK” as the critical input when you run the critical queries against the masked data? (Being privileged may give you part of the answer, but bear in mind that the people doing the testing with that data shouldn’t be able to see the unmasked data or any translation tables.)

OTN Tour of Latin America 2015 : The Journey Begins – Arrival at Montevideo, Uruguay

After the quick flight to Montevideo, I was met by Edelwisse and Nelson. A couple of minutes later Mike Dietrich arrived. You know, that guy that pretends to understand upgrades! We drove over to the hotel, arriving at about 11:00. Check-in was not until 15:00, so I had to wait a few minutes for them to prep my room. The others were going out to get some food, but I had a hot date with my bed. I got to my room, showered and hit the hay.

I was meant to meet up with the others at about 19:00 to get some food, but I slept through. In fact, I slept until about 04:00 the next day, which was about 15 hours. I think that may be a record… I’m feeling a bit punch-drunk now, but I’m sure once I start moving things will be fine…

Today is the first day of the tour proper. Fingers crossed…

Cheers

Tim…


OTN Tour of Latin America 2015 : The Journey Begins – Arrival at Montevideo, Uruguay was first posted on August 3, 2015 at 8:45 am.

OTN Tour of Latin America 2015 : The Journey Begins – Buenos Aires Airport

The flight from Paris to Buenos Aires was long, but relatively uneventful. One little patch of turbulence, then plain sailing.

For the main meal they didn’t have me down as vegetarian. I don’t know why I bother ordering special meals, because the vast majority of the time I don’t get them. Interestingly, they did have a vegetarian breakfast for me, probably fixed up after the dinner issue, but they gave it to the lady two seats away from me. She had seen the issue with the dinner and passed it across to me. In big letters on the tray it said 27J, which was my seat number, so I’m not quite sure why it was so difficult. I honestly think a lot of people look at me and think, “There is no way he is that fat and a vegetarian!”, so they give it to someone who looks suitably skinny… :)

I watched Insurgent, which was OK, then started to watch Fast & Furious 7, but couldn’t get into it on such a small screen. Amazingly, I did manage to catch small snatches of sleep, which was very welcome, interspersed with the obligatory periods of standing at the back of the plane pretending there aren’t loads of hours of sitting left.

So now I’m in Buenos Aires airport waiting to get back on to the same plane to fly the last 25 mins to Montevideo. I will be back in Buenos Aires in a couple of days, but I will be arriving by ferry next time! :)

Cheers

Tim…


OTN Tour of Latin America 2015 : The Journey Begins – Buenos Aires Airport was first posted on August 2, 2015 at 8:27 am.

OTN Tour of Latin America 2015 : The Journey Begins – CDG Airport

I’ve been in Charles de Gaulle airport for about three hours now. Only another four to go… :)

I tried to record another technical video, but you can hear kids in the background. Now that the timings are sorted, it should be pretty quick to re-record when I get to a hotel, so that’s good I guess. I’m not sure I can face doing another one today.

My YouTube channel is on 199 subscribers. About to ding to the magic 200. :)

Perhaps I should get the GoPro out and do some filming of the barren wasteland, which is the K gates in Terminal 2E.

Cheers

Tim…


OTN Tour of Latin America 2015 : The Journey Begins – CDG Airport was first posted on August 1, 2015 at 6:59 pm.

OTN Tour of Latin America 2015 : The Journey Begins

I’m about to board a flight to Paris, where I will wait for 7 hours before starting my 14-hour flight to Montevideo, Uruguay. I think you can probably guess how I’m feeling at this moment…

Why won’t someone hurry up and invent a teleport device?

I will probably put out little posts like this along the way, just so friends and family know what is going on. It’s wrong to wish your life away, but I’m really not looking forward to the next 20+ hours…

Hopefully I will get power in Paris, so I can do some stuff on my laptop…

Cheers

Tim…


OTN Tour of Latin America 2015 : The Journey Begins was first posted on August 1, 2015 at 1:45 pm.

FASTSYNC Redo Transport for Data Guard in #Oracle 12c

FASTSYNC is a new LogXptMode for Data Guard in 12c. It enables the Maximum Availability protection mode at larger distances with less performance impact than LogXptMode SYNC had before. The old SYNC behavior looks like this:

(Diagram: LogXptMode=SYNC)

The point is that we need to wait for two acknowledgements from RFS (“got it” & “wrote it”) before we can write the redo entry locally and get the transaction committed. This may slow down transactions on the Primary, especially over long distances. Now to the new feature:

(Diagram: LogXptMode=FASTSYNC)

Here, we wait only for the first acknowledgement (“got it”) from RFS before we can write locally. There is still a possible performance impact over large distances, but it is smaller than before. This is how it looks implemented:

DGMGRL> show configuration;   

Configuration - myconf

  Protection Mode: MaxAvailability
  Members:
  prima - Primary database
    physt - (*) Physical standby database 

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS   (status updated 26 seconds ago)

DGMGRL> show database physt logxptmode
  LogXptMode = 'fastsync'
DGMGRL> exit
[oracle@uhesse ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Sat Aug 1 10:41:27 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show parameter log_archive_dest_2

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2		     string	 service="physt", SYNC NOAFFIRM
						  delay=0 optional compression=
						 disable max_failure=0 max_conn
						 ections=1 reopen=300 db_unique
						 _name="physt" net_timeout=30,
						 valid_for=(online_logfile,all_
						 roles)
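
Notice the SYNC NOAFFIRM pair in log_archive_dest_2 – that is what FASTSYNC translates to. Outside the broker you could set it directly; a sketch using the names from this configuration:

SQL> alter system set log_archive_dest_2 =
  2  'service="physt" SYNC NOAFFIRM net_timeout=30 valid_for=(online_logfile,all_roles) db_unique_name="physt"'
  3  scope=both;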

My configuration uses Fast-Start Failover, just to show that this is no restriction. FASTSYNC can also be used together with Far Sync instances, although that is not required. You can’t have Maximum Protection with FASTSYNC, though:

DGMGRL> disable fast_start failover;
Disabled.
DGMGRL> edit configuration set protection mode as maxprotection;
Error: ORA-16627: operation disallowed since no standby databases would remain to support protection mode

Failed.
DGMGRL> edit database physt set property logxptmode=sync;
Property "logxptmode" updated
DGMGRL> edit configuration set protection mode as maxprotection;
Succeeded.

Addendum: As my dear colleague Joel Goodman pointed out, the name of the process that does the Redo Transport from Primary to Standby has changed from LNS to NSS (for synchronous Redo Transport):

SQL> select name,description from v$bgprocess where paddr<>'00';

NAME  DESCRIPTION
----- ----------------------------------------------------------------
PMON  process cleanup
VKTM  Virtual Keeper of TiMe process
GEN0  generic0
DIAG  diagnosibility process
DBRM  DataBase Resource Manager
VKRM  Virtual sKeduler for Resource Manager
PSP0  process spawner 0
DIA0  diagnosibility process 0
MMAN  Memory Manager
DBW0  db writer process 0
MRP0  Managed Standby Recovery
TMON  Transport Monitor
ARC0  Archival Process 0
ARC1  Archival Process 1
ARC2  Archival Process 2
ARC3  Archival Process 3
ARC4  Archival Process 4
NSS2  Redo transport NSS2
LGWR  Redo etc.
CKPT  checkpoint
RVWR  Recovery Writer
SMON  System Monitor Process
SMCO  Space Manager Process
RECO  distributed recovery
LREG  Listener Registration
CJQ0  Job Queue Coordinator
PXMN  PX Monitor
AQPC  AQ Process Coord
DMON  DG Broker Monitor Process
RSM0  Data Guard Broker Resource Guard Process 0
NSV1  Data Guard Broker NetSlave Process 1
INSV  Data Guard Broker INstance SlaVe Process
FSFP  Data Guard Broker FSFO Pinger
MMON  Manageability Monitor Process
MMNL  Manageability Monitor Process 2

35 rows selected.

I’m not quite sure, but I think that was already the case in 11gR2. I had just kept the old name in my sketches out of habit :-)

Tagged: 12c New Features, Data Guard