Oakies Blog Aggregator

OpenWorld Monday…I’m so screwed

As I mentioned in a previous blog post, I whipped up a small Application Express application to let me plan out my OpenWorld activities. But there’s a small problem with being able to quickly and easily find things based on the schedule… there is too much to see!!!

I had a quick glance through the schedule just for Monday, and already I can see myself being very frustrated with all of the content that I am going to have to miss out on, simply because it clashes with other content I’d like to see.

And just to compound things, with any luck I’ll be able to score some time in the theatre in the demo grounds to give some lightning talks in both the morning and afternoon. More details on that soon!

But here is my wish list – just smashed full of conflicts… sigh.

Monday morning

The New Auto-Upgrade for Oracle Databases [TRN4031]
Daniel Overby Hansen, Database Administrator, SimCorp
Mike Dietrich, Master Product Manager – Database Upgrades and Migrations, Oracle
Monday, Oct 22, 9:00 a.m. – 9:45 a.m. | Moscone West – Room 3004

which overlaps with

High Availability and Sharding Deep Dive with Next-Generation Oracle Database [TRN4032]
Wei Hu, Vice President of Product Development, Oracle
Monday, Oct 22, 9:00 a.m. – 9:45 a.m. | Moscone West – Room 3007

which overlaps with

Oracle Linux and Oracle VM VirtualBox: The Enterprise Development Platform [PRO4724]
Simon Coter, Director of Product Management – Oracle VM & VirtualBox, Oracle
Sergio Leunissen, Vice President of Product Management, Oracle
Monday, Oct 22, 9:00 a.m. – 9:45 a.m. | Moscone South – Room 152

which overlaps with

A Day in the Life of a DBA in an Autonomous World [PRO4376]
Akshay Sangaonkar, Senior Principal Product Manager, Oracle
Karl Dias, Vice President, Database Manageability, Oracle
Monday, Oct 22, 9:00 a.m. – 9:45 a.m. | Marriott Marquis (Golden Gate Level) – Golden Gate C3

which overlaps with

Make AppDev Fun Again, with the Oracle RAD Stack [PRO4087]
Kris Rice, Senior Director, Oracle
Michael Hichwa, Software Development VP, Oracle
Joel Kallman, Senior Director, Software Development, Oracle
Monday, Oct 22, 9:00 a.m. – 9:45 a.m. | Moscone West – Room 3009

which overlaps with

Using Deep Learning and Neural Networks in Oracle Database 18c [BUS1891]
Brendan Tierney, Oralytics
Neil Chandler, Database Administrator, Chandler Systems Ltd
Monday, Oct 22, 9:00 a.m. – 9:45 a.m. | Moscone West – Room 3001

which overlaps with

Analytic Functions: A Developer’s Best Friend [DEV6244]
Timothy Hall, DBA, Developer, Author, Trainer, oracle-base.com
Code One Tracks: Database, Big Data, and Data Science
Session Type: Developer Session
Monday, Oct 22, 9:00 a.m. – 9:45 a.m. | Moscone West – Room 2003

 

Monday mid-morning

Oracle Linux and Oracle VM: Get Trained for Cloud, Hybrid, and On-Premises [TRN5828]
Avi Miller, Product Management Director, Oracle
Antoinette O’Sullivan, Director of Training and Documentation for Linux and Virtualization, Oracle
Monday, Oct 22, 10:30 a.m. – 11:15 a.m. | Moscone South – Room 154

which overlaps with

Why Citizen Developers Should Be Your New Best Friend [TRN4091]
David Peake, Senior Principal Product Manager, Oracle
Monday, Oct 22, 10:30 a.m. – 11:15 a.m. | Moscone West – Room 3009

which overlaps with

Oracle Autonomous Database Cloud [PKN3947]
Andrew Mendelsohn, Executive Vice President Database Server Technologies, Oracle
Monday, Oct 22, 11:00 a.m. – 12:15 p.m. | The Exchange @ Moscone South – The Arena

which overlaps with

3D: Docker for Database Developers [TIP1247]
Roel Hartman, Director, APEX Consulting
Monday, Oct 22, 11:30 a.m. – 12:15 p.m. | Moscone West – Room 3020

which overlaps with

Oracle Linux: State of the Penguin [PRO4720]
Wim Coekaerts, Senior Vice President, Operating Systems and Virtualization Engineering, Oracle
Monday, Oct 22, 11:30 a.m. – 12:15 p.m. | Moscone West – Room 2000

which overlaps with

50 Shades of Data: How, When, Why—Big, Relational, NoSQL, Elastic, Graph, Event [DEV4976]
Lucas Jellema, CTO, AMIS Services BV
Code One Tracks: Database, Big Data, and Data Science, Java Server-Side Development and Microservices
Session Type: Developer Session
Monday, Oct 22, 10:30 a.m. – 11:15 a.m. | Moscone West – Room 2007

which overlaps with

Microservices: Get Rid of Your DBA and Send the DB into Burnout [DEV5504]
Franck Pachot, Computing Engineer, CERN
Code One Tracks: Database, Big Data, and Data Science
Session Type: Developer Session
Monday, Oct 22, 10:30 a.m. – 11:15 a.m. | Moscone West – Room 2003

which overlaps with

REST-Enabled Neural Networks in Oracle Database 18c [DEV5026]
Brendan Tierney, Oralytics
Neil Chandler, Database Architect, Chandler Systems
Code One Tracks: Database, Big Data, and Data Science
Session Type: Developer Session
Monday, Oct 22, 11:30 a.m. – 12:15 p.m. | Moscone West – Room 2001

which overlaps with

Real-Life SQL Tuning: From Four Minutes to Eight Seconds in an Hour [DEV5668]
Liron Amitzi, Senior Database Consultant, Brillix LTD
Code One Tracks: Database, Big Data, and Data Science
Session Type: Developer Session
Monday, Oct 22, 11:30 a.m. – 12:15 p.m. | Moscone West – Room 2003

 

Monday afternoon

Oracle Database in an Asynchronous World [TRN5515]
Dominic Giles, Master Product Manager, Oracle
Monday, Oct 22, 3:45 p.m. – 4:30 p.m. | Moscone West – Room 3003

which overlaps with

A Day in the Life of a Real-World Performance Engineer [TRN4026]
Graham Wood, Software Architect, Oracle
Robert Carlin, Software Development Manager, Real World Performance, Oracle Database Development, Oracle
Mihajlo Tekic, Oracle
Monday, Oct 22, 3:45 p.m. – 4:30 p.m. | Moscone West – Room 3004

which overlaps with

Build a Web App with Oracle REST Data Services and Oracle JavaScript Extension Toolkit [HOL6325]
Jeff Smith, Senior Principal Product Manager, Oracle
Ashley Chen, Senior Product Manager, Oracle
Colm Divilly, Consulting Member of Technical Staff, Oracle
Elizabeth Saunders, Principal Technical Staff, Oracle
Monday, Oct 22, 3:45 p.m. – 4:45 p.m. | Marriott Marquis (Yerba Buena Level) – Salon 3/4

 

Monday late afternoon

Oracle Optimizer and the Road to the Latest Generation of Oracle Database [PRO4009]
Nigel Bayliss, Senior Principal Product Manager, Oracle
Monday, Oct 22, 4:45 p.m. – 5:30 p.m. | Moscone West – Room 3005

which overlaps with

Oracle Autonomous OLTP Database Cloud Overview and Roadmap [PRO3978]
Maria Colgan, Master Product Manager, Oracle
Juan Loaiza, Senior Vice President, Oracle
Monday, Oct 22, 4:45 p.m. – 5:30 p.m. | Moscone West – Room 3003

which overlaps with

Data Warehouse Like a Tech Startup with Oracle Autonomous Data Warehouse Cloud [BUS3194]
Mark Rittman, Managing Director, MJR Analytics ltd
Monday, Oct 22, 4:45 p.m. – 5:30 p.m. | Moscone West – Room 3006

 

Monday evening

Data Management in a Microservices World [TIP4175]
Gerald Venzl, Senior Principal Product Manager, Oracle
Monday, Oct 22, 5:45 p.m. – 6:30 p.m. | Moscone West – Room 3003

which overlaps with

18(ish) Things Developers Will Love About Oracle Database 18c [PRO4093]
Chris Saxon, Developer Advocate for SQL, Oracle
Monday, Oct 22, 5:45 p.m. – 6:30 p.m. | Moscone West – Room 3009

which overlaps with

What’s New for Oracle SQL Developer [PRO4058]
Jeff Smith, Senior Principal Product Manager, Oracle
Monday, Oct 22, 5:45 p.m. – 6:30 p.m. | Moscone West – Room 3006

which overlaps with

What Everyone Should Know About Oracle Partitioning [PRO4046]
Hermann Baer, Senior Director Product Management, Oracle
Monday, Oct 22, 5:45 p.m. – 6:30 p.m. | Moscone West – Room 3005

 

Column Group Catalog

I seem to have written a number of articles about column groups – the rather special, and most useful, variant on extended stats. To make it as easy as possible to find the right article I’ve decided to produce a little catalogue (catalog) of all the relevant articles, with a little note about the topic each article covers. Some of the articles will link to others in the list, and there are a few items in the list from other blogs. There are also a few items which are the titles of drafts which have been hanging around for the last few years.
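For anyone who hasn’t met them before, this is roughly what a column group looks like in practice. A minimal sketch, assuming a table called SALES with two correlated columns REGION and CHANNEL (all the names here are invented for the example, not taken from any of the articles):

-- Create the column group (an extension, i.e. a hidden virtual column)
select dbms_stats.create_extended_stats(
         ownname   => user,
         tabname   => 'SALES',
         extension => '(region,channel)'
       ) as extension_name
from   dual;

-- Gather stats so the column group gets its own statistics
begin
  dbms_stats.gather_table_stats(
    ownname    => user,
    tabname    => 'SALES',
    method_opt => 'for all columns size auto'
  );
end;
/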

 

Correcting datatypes with minimal downtime

Just a quick post here by request of an attendee of the September Office Hours. I had a demo converting the data type of a primary key without losing data and with minimal impact to the availability of the database using DBMS_REDEFINITION.  You can see that video here

but here is the script used to run that demo. 



SQL> drop table t purge;

Table dropped.

SQL> drop table t_interim purge;

Table dropped.

--
-- Source table with string primary key that we want to convert to numeric
--
SQL>
SQL> create table t as
  2  select
  3     to_char(object_id) pk
  4    ,owner
  5    ,object_name
  6    ,subobject_name
  7    ,object_id
  8    ,data_object_id
  9  from all_objects
 10  where object_id is not null;

Table created.

SQL>
SQL> alter table t add primary key ( pk );

Table altered.

--
-- Empty interim table with a numeric primary key, representing our corrected data type
--
SQL>
SQL> create table t_interim as
  2  select
  3     object_id pk
  4    ,owner
  5    ,object_name
  6    ,subobject_name
  7    ,object_id
  8    ,data_object_id
  9  from all_objects
 10  where 1=0;

Table created.

SQL>
SQL> desc t
 Name                                                                    Null?    Type
 ----------------------------------------------------------------------- -------- -------------------------------------
 PK                                                                      NOT NULL VARCHAR2(40)
 OWNER                                                                   NOT NULL VARCHAR2(128)
 OBJECT_NAME                                                             NOT NULL VARCHAR2(128)
 SUBOBJECT_NAME                                                                   VARCHAR2(128)
 OBJECT_ID                                                               NOT NULL NUMBER
 DATA_OBJECT_ID                                                                   NUMBER

SQL>
SQL> desc t_interim
 Name                                                                    Null?    Type
 ----------------------------------------------------------------------- -------- -------------------------------------
 PK                                                                      NOT NULL NUMBER
 OWNER                                                                   NOT NULL VARCHAR2(128)
 OBJECT_NAME                                                             NOT NULL VARCHAR2(128)
 SUBOBJECT_NAME                                                                   VARCHAR2(128)
 OBJECT_ID                                                               NOT NULL NUMBER
 DATA_OBJECT_ID                                                                   NUMBER

--
-- Standard call will fail, because the columns do not align
--

SQL>
SQL> begin
  2    dbms_redefinition.start_redef_table
  3         (  uname           => user,
  4            orig_table      => 'T',
  5            int_table       => 'T_INTERIM'
  6            );
  7  end;
  8  /
begin
*
ERROR at line 1:
ORA-42016: shape of interim table does not match specified column mapping
ORA-06512: at "SYS.DBMS_REDEFINITION", line 109
ORA-06512: at "SYS.DBMS_REDEFINITION", line 3887
ORA-06512: at "SYS.DBMS_REDEFINITION", line 5208
ORA-06512: at line 2

--
-- Column mapping call fails, because the column being manipulated is the primary key
--

SQL>
SQL> declare
  2    l_colmap varchar2(200) :=
  3      q'{   to_number(pk) as pk
  4           ,owner
  5           ,object_name
  6           ,subobject_name
  7           ,object_id
  8           ,data_object_id}';
  9  begin
 10    dbms_redefinition.start_redef_table
 11         (  uname           => user,
 12            orig_table      => 'T',
 13            int_table       => 'T_INTERIM',
 14            col_mapping   => l_colmap
 15            );
 16  end;
 17  /
declare
*
ERROR at line 1:
ORA-42008: error occurred while instantiating the redefinition
ORA-12016: materialized view does not include all primary key columns
ORA-06512: at "SYS.DBMS_REDEFINITION", line 109
ORA-06512: at "SYS.DBMS_REDEFINITION", line 3887
ORA-06512: at "SYS.DBMS_REDEFINITION", line 5208
ORA-06512: at line 10

--
-- Abort this run to reset our objects
--

SQL>
SQL> begin
  2    dbms_redefinition.abort_redef_table
  3         (  uname           => user,
  4            orig_table      => 'T',
  5            int_table       => 'T_INTERIM'
  6            );
  7  end;
  8  /

PL/SQL procedure successfully completed.

--
-- Use the ROWID in lieu of the primary key
--

SQL>
SQL> declare
  2    l_colmap varchar2(200) :=
  3      q'{   to_number(pk) as pk
  4           ,owner
  5           ,object_name
  6           ,subobject_name
  7           ,object_id
  8           ,data_object_id}';
  9  begin
 10    dbms_redefinition.start_redef_table
 11         (  uname           => user,
 12            orig_table      => 'T',
 13            int_table       => 'T_INTERIM',
 14            col_mapping   => l_colmap,
 15            options_flag=>DBMS_REDEFINITION.cons_use_rowid
 16            );
 17  end;
 18  /

PL/SQL procedure successfully completed.

--
-- Add our subordinate objects
--

SQL>
SQL> alter table t_interim add primary key ( pk );

Table altered.

--
-- And finally finish off the process (this is the window of unavailability)
--
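--
-- (Not part of the original demo: if T were taking DML while the redefinition
--  ran, a call such as
--     exec dbms_redefinition.sync_interim_table(user, 'T', 'T_INTERIM')
--  here would keep that final window as short as possible.)
--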

SQL>
SQL> begin
  2    dbms_redefinition.finish_redef_table
  3         (  uname           => user,
  4            orig_table      => 'T',
  5            int_table       => 'T_INTERIM');
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL>
SQL> desc t
 Name                                                                    Null?    Type
 ----------------------------------------------------------------------- -------- -------------------------------------
 PK                                                                      NOT NULL NUMBER
 OWNER                                                                   NOT NULL VARCHAR2(128)
 OBJECT_NAME                                                             NOT NULL VARCHAR2(128)
 SUBOBJECT_NAME                                                                   VARCHAR2(128)
 OBJECT_ID                                                               NOT NULL NUMBER
 DATA_OBJECT_ID                                                                   NUMBER

Enjoy!

Upload large files to Oracle Cloud with the Object Storage REST API

A new blog post on the Databases at CERN blog:

Using the oci-curl() bash function to upload large files in multiple chunks to the OCI Object Storage service.

https://db-blog.web.cern.ch/blog/franck-pachot/2018-09-oracle-cloud-upload-large-files-through-object-store-rest-api

Oracle Cloud: upload large files through the Object Store REST API

Three Months at Microsoft

Time flies when you’re having fun, or when you’re crazy busy – I’m unsure which is more true.

I just completed my first review as an employee at Microsoft and good news!  They’re going to keep me.

Datatype conversion laziness … yet another reason

I’ve got two words to describe my thoughts on being lazy with data types… JUST DON’T!

There have been many blog posts, videos, AskTOM questions and the like over the years with numerous examples of how the optimizer gets confused, how the SQL engine gets confused… and even how developers will ultimately get confused if you do not spend that tiny bit of extra effort in ensuring data type consistency.

I personally wish we had an init.ora parameter called (say) “response_to_data_type_conversion” = IMPLICIT (default) or ERROR, so that when a data type conversion is encountered we could choose either to report it back to the developer as an error, or to implicitly convert it as we currently do. Then we could set it to “error”, at least in non-production environments, to get a handle on where we are not being diligent.

But anyway, enough ranting. The reason for this blog post is to give yet another example of how making assumptions about data type handling can lead to spurious errors.

Let’s assume we have a simple requirement – to collapse a result set into a single CSV result stored in a CLOB. Here is my first cut at the code



SQL> set serverout on
SQL> declare
  2    l_csv  clob := empty_clob();
  3  begin
  4   for i in (select *
  5             from dba_objects )
  6   loop
  7        l_csv := l_csv || i.owner||','||i.object_name||','||i.object_id;
  8    end loop;
  9    dbms_output.put_line('length=' || dbms_lob.getlength(l_csv));
 10  end;
 11  /
declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at line 7
ORA-06512: at line 7

And it fails. To get an idea as to why and where it fails, I’ll add an exception handler to capture the state of the CLOB.



SQL> set serverout on
SQL> declare
  2    l_csv  clob := empty_clob();
  3  begin
  4   for i in (select *
  5             from dba_objects )
  6   loop
  7        l_csv := l_csv || i.owner||','||i.object_name||','||i.object_id;
  8    end loop;
  9    dbms_output.put_line('length=' || dbms_lob.getlength(l_csv));
 10  exception
 11    when others then
 12      dbms_output.put_line('length=' || dbms_lob.getlength(l_csv));
 13      raise;
 14  end;
 15  /
length=32776
declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at line 13
ORA-06512: at line 7
ORA-06512: at line 7

Now that number looks “interesting” in that it is right around the 32,767-byte length limit for a VARCHAR2 in PL/SQL. But why would a VARCHAR2 come into play here? We are dealing with a CLOB, and that should be allowed to get much larger. To further confuse things, look what happens when I perform functionally the same operation, but using an intermediate variable for each row fetched in the loop.



SQL> declare
  2    l_csv  clob := empty_clob();
  3    l_line varchar2(255);
  4  begin
  5   for i in (select *
  6             from dba_objects )
  7   loop
  8        l_line := i.owner||','||i.object_name||','||i.object_id;
  9        l_csv := l_csv || l_line;
 10    end loop;
 11    dbms_output.put_line('length ' || length(l_csv));
 12  end;
 13  /

PL/SQL procedure successfully completed.

And now it works! That seems extraordinary because the logic would appear to be identical.

The answer here is once again – not taking sufficient care with our data type conversions. The OBJECT_ID we are fetching is numeric. Because we are simply slamming that into a concatenation operator (||), we need some implicit data type conversion, and to achieve that, we need some casting to VARCHAR2. We are not privy to how the PL/SQL execution engine performs the conversion, but the error suggests that all components of the expression (including the left hand side) are being cast to VARCHAR2, and hence our “clob” ultimately exceeds the 32k limit.

The resolution is simple – take control of the data type conversion as we should have done anyway:



SQL> set serverout on
SQL> declare
  2    l_csv  clob := empty_clob();
  3  begin
  4   for i in (select *
  5             from dba_objects )
  6   loop
  7        l_csv := l_csv || i.owner||','||i.object_name||','||to_char(i.object_id);
  8    end loop;
  9    dbms_output.put_line('length=' || dbms_lob.getlength(l_csv));
 10  end;
 11  /
length=3491433

PL/SQL procedure successfully completed.

The moral of the story remains the same. Consistent and explicit handling of data type conversions will give you more robust code.
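As an aside (this wasn’t in the demo above), if you want to keep the whole operation firmly in LOB territory regardless of what ends up in the expression, appending through DBMS_LOB sidesteps the 32k expression limit entirely. A minimal sketch, still with the explicit TO_CHAR on principle:

set serverout on
declare
  l_csv   clob;
  l_line  varchar2(4000);
begin
  -- Build the CLOB with DBMS_LOB calls instead of the || operator
  dbms_lob.createtemporary(l_csv, true);
  for i in (select * from dba_objects)
  loop
    l_line := i.owner||','||i.object_name||','||to_char(i.object_id);
    dbms_lob.writeappend(l_csv, length(l_line), l_line);
  end loop;
  dbms_output.put_line('length=' || dbms_lob.getlength(l_csv));
  dbms_lob.freetemporary(l_csv);
end;
/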

Another little 12c improvement

You’ve got a huge table, right? Massive! Immense! And then something bad happens. You get asked to remove one of the columns from that table.

“No problem” you think. “I won’t run the ‘drop column’ command because that will visit every block and take forever!”

So you settle on the perfect tool for such a scenario – simply mark the column as unused so that it is no longer available to application code and the developers that write that code.

But there’s a catch that not many people know about. SET UNUSED is meant to be just a trivial data dictionary operation. We don’t touch the data; we only manipulate the column definition, and thus an execution of SET UNUSED should be instantaneous. This is indeed the case most of the time, but as the example below shows, the way the column was added to the table can have a bearing on what it costs to have that column removed.


SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

--
-- large-ish table to monitor the impact
--
SQL> create table t1
  2   as
  3   select d.*
  4   from dba_objects d,
  5   ( select 1 from dual connect by level <= 100);

Table created.

SQL> set timing on
SQL> select count(*) from t1;

  COUNT(*)
----------
   8713700

Elapsed: 00:00:01.92

--
-- Take an existing column to UNUSED ... instantaneous!
--
SQL> alter table t1 set unused column created;

Table altered.

Elapsed: 00:00:00.06


--
-- Add a new column, and then make it UNUSED ... instantaneous!
--
SQL> alter table t1 add new_col1 timestamp;

Table altered.

Elapsed: 00:00:00.01
SQL> alter table t1 set unused column new_col1;

Table altered.

Elapsed: 00:00:00.00


--
-- Add a new column with a NOT NULL default, and then make it UNUSED ... careful!
--
SQL> alter table t1 add new_col2 timestamp default sysdate not null;

Table altered.

Elapsed: 00:00:00.00
SQL> alter table t1 set unused column new_col2;

Table altered.

Elapsed: 00:01:35.39

You will not get the same issue in 12c.


SQL> select banner from v$version;

BANNER
-----------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> create table t1  nologging
  2   as
  3   select d.*
  4   from dba_objects d,
  5   ( select 1 from dual connect by level <= 100);

Table created.

SQL>
SQL> set timing on
SQL> select count(*) from t1;

  COUNT(*)
----------
   7951500

Elapsed: 00:00:02.20
SQL> alter table t1 set unused column created;

Table altered.

Elapsed: 00:00:00.08
SQL>
SQL> alter table t1 add new_col1 timestamp;

Table altered.

Elapsed: 00:00:00.00
SQL> alter table t1 set unused column new_col1;

Table altered.

Elapsed: 00:00:00.00
SQL>
SQL> alter table t1 add new_col2 timestamp default sysdate not null;

Table altered.

Elapsed: 00:00:00.01
SQL> alter table t1 set unused column new_col2;

Table altered.

Elapsed: 00:00:00.01
SQL>
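One footnote that isn’t in the demo above: SET UNUSED only hides the columns; they still occupy space until you drop them. When a quiet maintenance window comes along, you can check what is pending and reclaim the space. A sketch, using the T1 table from the demo:

-- How many unused columns are waiting to be dropped?
select * from user_unused_col_tabs where table_name = 'T1';

-- Physically remove them - this one does visit every block, so pick your moment
alter table t1 drop unused columns;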

#Exasol Database whoami

This little script displays some useful meta-information:

SQL_EXA> create schema myschema;
EXA: create schema myschema;

Rows affected: 0

SQL_EXA> create or replace script whoami as
 output('Current User: '.. tostring(exa.meta.current_user))
 output('Current Schema: '.. tostring(exa.meta.current_schema))
 output('Session ID: '.. tostring(exa.meta.session_id))
 output('Database Version: '.. tostring(exa.meta.database_version))
 output('Number of Nodes: '.. tostring(exa.meta.node_count))
 /
EXA:create or replace script whoami as...

Rows affected: 0
SQL_EXA> col output for a40;
COLUMN   output ON
FORMAT   a40
SQL_EXA> execute script whoami with output;
EXA: execute script whoami with output;

OUTPUT
----------------------------------------
Current User: SYS
Current Schema: MYSCHEMA
Session ID: 1612024483893367379
Database Version: 6.1.0-alpha1
Number of Nodes: 1

5 rows in resultset.

All available metadata is documented here (Chapter 3. Concepts -> 3.6 UDF scripts).
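As a side note that isn’t part of the original post: if I recall Exasol’s context functions correctly, a couple of those values can also be pulled straight from SQL when you just need a quick check without a script:

SELECT CURRENT_USER, CURRENT_SCHEMA, CURRENT_SESSION;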

Configuring Firewall Access for SQL Database on Azure

It’s quite common that the first time you work with a feature in a cloud interface, it can appear quite foreign until you’ve done it a few times.  If it’s a task that you may not do often, or only do once, it can be downright painful.  It doesn’t matter who the cloud provider is or what the application is; we all have felt the pain of this type of situation, and it’s why some of us even started blogging… :)  Until it’s familiar, it may not feel comfortable and you may not even like how it works.

As a newer employee at Microsoft, I’m often learning right along with my customers.  Lucky for me, I learn quickly, but I feel their pain, and if something slows me down, then I know it will slow the customer down too.  One of the tasks that I noticed tripped us up in configuring SQL Database on Azure was configuring the firewall for SSMS (SQL Server Management Studio) or Visual Studio (VS) access.  We all know that having access from SSMS is important for many DBAs to have a single pane of glass for database management, and for me, well, I like to do everything in Visual Studio – I’m just strange that way.

The Set up

To configure SSMS access to SQL Database, log into your Azure portal, then click on the database you wish to configure access for (sorry folks, I need this play environment for a while, so I’ve removed some info from the screenshots):

[Screenshot: conf_firewall.png]

Click on Set Server Firewall at the top, which will bring you to the following screen:

[Screenshot: conf_firewall3.png]

The one thing I really like about Azure is that it displays your IP address in the console, assuming this may be the information you require.  If you are attempting to add this to access a local installation of VS or SSMS, simply enter an identifying name for your rule, add the IP address or a range of IP addresses in the Start IP and End IP fields, then click Save.

If you need to retrieve the IP address for another host or workstation, you can do this by opening the command prompt and typing ipconfig.  From there, look at the IPv4 address:

[Screenshot: conf_firewall2.png]

Now, know that if you work from numerous WiFi, MiFi and other network connections, your IP address will change.  If your IP address changes, you’ll need to add a new rule, or if it’s a temporary location, just remove the rule and create a new one for your new access point to keep things uncluttered.
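As an aside that isn’t from the original post, if you’d rather not click through the portal every time your address changes, the same server-level rule can be managed with T-SQL while connected to the logical server’s master database. A sketch, with the rule name and IP addresses as placeholders:

-- Create or update a server-level firewall rule (run against the master database)
EXECUTE sp_set_firewall_rule
    @name             = N'HomeOffice',
    @start_ip_address = '203.0.113.10',
    @end_ip_address   = '203.0.113.10';

-- Remove it again when you leave that network
EXECUTE sp_delete_firewall_rule @name = N'HomeOffice';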

Connect Time

Once this is performed, it’s time to connect to your Azure SQL Database.  If you’re asking me how you find that information, let’s go back to our main page for the database:

[Screenshot: Conf_firewall4.png]

Click on the link for Show database connection strings and it will take you to the following, displaying the JDBC connection info in the second tab:

[Screenshot: conf_firewall5-1.png]

Well, look there: all you need is the first thing displayed after Server=.  Note that it’s a “,” and not a “:” between the server name and the port; you’ll need to do the same in SSMS or VS when you enter the connection.  Your user name has an addition too:  Username@domain.com@sql database name.

After you enter this, use the password you configured your database with.  If you forgot, the main page in the portal has a “reset password” link for you to update it.

You’ll then connect with something that looks like the following:

[Screenshot: conf_firewall6.png]

I can then switch to a user database and manage Azure SQL Databases just as I would if they were on-prem.  Enjoy!




Copyright © DBAKevlar [Configuring Firewall Access for SQL Database on Azure], All Rights Reserved. 2018.