JDBC_00 - Cannot connect to specified database: com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: Listener refused the connection with the following error: ORA-12505, TNS:listener does not currently know of SID given in connect descriptor
We faced this issue while connecting to an Oracle 19c PDB using the SID-style connection string format:
jdbc:oracle:thin:@<host_url>:<port>:<OracleSID>
Resolution: a PDB is registered with the listener as a service, not a SID, so replace the last colon with a forward slash and use the service name:
jdbc:oracle:thin:@<host_url>:<port>/<service_name>
--identifying LOB segments using the most space (joining on owner as well, since segment names can repeat across schemas):
select sg.owner, sg.segment_name, lb.table_name, lb.column_name, sum(sg.bytes)/(1024*1024) mb
from dba_segments sg, dba_lobs lb
where sg.segment_name = lb.segment_name
and sg.owner = lb.owner
group by sg.owner, sg.segment_name, lb.table_name, lb.column_name
order by mb desc;
--generating shrink commands to reduce the LOB sizes
select sg.segment_name, lb.owner, lb.table_name, lb.column_name, sum(sg.bytes)/(1024*1024) mb,
'alter table ' || lb.owner || '.' || lb.table_name || ' modify lob (' || lb.column_name || ') (shrink space);' shrink_command
from dba_segments sg, dba_lobs lb
where sg.segment_name = lb.segment_name
and sg.owner = lb.owner
group by sg.segment_name, lb.owner, lb.table_name, lb.column_name
order by mb desc;
--example:
alter table hr.employee modify lob (emp_address) (shrink space);
We wanted to schedule a nightly backup in a new Oracle 19c DB; however, we could not trigger the Unix export script (expdp) from DBMS_SCHEDULER due to a missing Linux PAM configuration. So, as a temporary arrangement, we used the DBMS_DATAPUMP API to perform this task as shown below. The DBMS_DATAPUMP API comes in handy when you want to perform an export or import entirely from within SQL Developer and would rather not use the Unix command prompt.
We created a procedure EXPORT_SCHEMA and scheduled it to run daily as below. You can also find below the links I referred to while preparing this.
create or replace NONEDITIONABLE procedure SYSTEM.EXPORT_SCHEMA IS
  -- Data Pump job handle
  h1 NUMBER;
  v_current_time DATE := sysdate;
  v_dbname varchar2(8) := ora_database_name;
  v_schema_expression varchar2(100) := 'LIKE ''%MYSCHEMAS''';
  v_dump_file_name VARCHAR2(100);
  v_log_file_name VARCHAR2(100);
BEGIN
  h1 := dbms_datapump.open('EXPORT', 'SCHEMA', NULL, 'EXP_JOB_' || to_char(v_current_time, 'YYYY_MM_DD_HH24MI'), 'LATEST');
  -- %U lets Data Pump number the files created by parallel workers
  v_dump_file_name := v_dbname || '_DAILY_BACKUP_' || trim(to_char(v_current_time, 'DAY')) || '_%U.dmp';
  v_log_file_name := v_dbname || '_DAILY_BACKUP_'
    || trim(to_char(v_current_time, 'DAY'))
    || '_' || to_char(v_current_time, 'YYYY-MM-DD-HH24MI') || '.log';
  -- reusefile => 1 overwrites last week's dump file of the same name
  dbms_datapump.add_file(handle => h1, filename => v_dump_file_name, directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_dump_file, reusefile => 1);
  dbms_datapump.add_file(handle => h1, filename => v_log_file_name, directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_log_file);
  -- A metadata filter is used to specify the schemas that will be exported
  dbms_datapump.metadata_filter(h1, 'SCHEMA_EXPR', v_schema_expression);
  dbms_datapump.set_parallel(h1, 8);
  dbms_datapump.set_parameter(handle => h1, name => 'COMPRESSION', value => 'ALL');
  -- to exclude scheduler jobs from the export:
  -- dbms_datapump.metadata_filter(h1, 'EXCLUDE_PATH_EXPR', 'IN (''JOB'')');
  dbms_datapump.start_job(h1);
  dbms_datapump.detach(h1);
END;
/
grant execute on SYSTEM.EXPORT_SCHEMA TO MYUSER;
/
begin
dbms_scheduler.create_job(
job_name=>'EXPORT_DATA_PUMP',
job_type=> 'PLSQL_BLOCK',
job_action=> 'BEGIN SYSTEM.EXPORT_SCHEMA; END;' ,
start_date => SYSTIMESTAMP,
end_date => SYSTIMESTAMP + INTERVAL '3650' day,
repeat_interval => 'freq=daily; byday=mon, tue, wed, thu, fri, sat, sun; byhour=20; byminute=0',
comments=>'Nightly Backup',
enabled=>TRUE,
auto_drop=>TRUE);
end;
/
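Once the job is scheduled, its run history can be checked from the standard scheduler data dictionary views (this check is an addition to the original note):

```sql
-- recent runs of the backup job, latest first
select job_name, status, actual_start_date, run_duration
from dba_scheduler_job_run_details
where job_name = 'EXPORT_DATA_PUMP'
order by actual_start_date desc;
```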
The above job keeps 7 backup files, with weekday names in the file names, which are recycled/reused every week. If you cannot accommodate 7 backup files, you can use the following variant instead.
create or replace NONEDITIONABLE procedure SYSTEM.EXPORT_SCHEMA IS
  -- Data Pump job handle
  h1 NUMBER;
  v_current_time DATE := sysdate;
  v_dbname varchar2(8) := ora_database_name;
  v_schema_expression varchar2(100) := 'LIKE ''%MYSCHEMAS''';
  v_num_dumps_to_retain number := 4;
  v_dump_file_name VARCHAR2(100);
  v_log_file_name VARCHAR2(100);
BEGIN
  h1 := dbms_datapump.open('EXPORT', 'SCHEMA', NULL, 'EXP_JOB_' || to_char(v_current_time, 'YYYY_MM_DD_HH24MI'), 'LATEST');
  -- instead of the weekday name, use the Julian day number modulo the number
  -- of dumps to retain, so the dump file names cycle through 0..3 across days
  v_dump_file_name := v_dbname || '_DAILY_BACKUP_' || mod(to_number(trim(to_char(v_current_time, 'J'))), v_num_dumps_to_retain) || '_%U.dmp';
  v_log_file_name := v_dbname || '_DAILY_BACKUP_'
    || trim(to_char(v_current_time, 'DAY'))
    || '_' || to_char(v_current_time, 'YYYY-MM-DD-HH24MI') || '.log';
  dbms_datapump.add_file(handle => h1, filename => v_dump_file_name, directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_dump_file, reusefile => 1);
  dbms_datapump.add_file(handle => h1, filename => v_log_file_name, directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_log_file);
  -- A metadata filter is used to specify the schemas that will be exported
  dbms_datapump.metadata_filter(h1, 'SCHEMA_EXPR', v_schema_expression);
  dbms_datapump.set_parallel(h1, 8);
  dbms_datapump.set_parameter(handle => h1, name => 'COMPRESSION', value => 'ALL');
  dbms_datapump.start_job(h1);
  dbms_datapump.detach(h1);
END;
/
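The retention arithmetic in this variant can be sanity-checked on its own: to_char(date, 'J') gives the Julian day number, and taking it modulo the number of dumps to retain yields a suffix that cycles across consecutive days. A standalone sketch (column aliases are mine):

```sql
-- with 4 dumps retained, the suffix cycles 0,1,2,3,0,1,... on consecutive
-- days, so only 4 dump name patterns ever exist and each is overwritten
-- every 4th day
select to_char(sysdate + level - 1, 'YYYY-MM-DD') backup_date,
       mod(to_number(to_char(sysdate + level - 1, 'J')), 4) dump_suffix
from dual
connect by level <= 6;
```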
Data redaction policies can be inspected via the REDACTION_COLUMNS and REDACTION_POLICIES views:
SELECT * FROM redaction_columns;
-- columns: OBJECT_OWNER, OBJECT_NAME, COLUMN_NAME, FUNCTION_TYPE, FUNCTION_PARAMETERS, REGEXP_PATTERN, REGEXP_REPLACE_STRING, REGEXP_POSITION, REGEXP_OCCURRENCE, REGEXP_MATCH_PARAMETER, COLUMN_DESCRIPTION
SELECT * FROM redaction_policies;
-- columns: OBJECT_OWNER, OBJECT_NAME, POLICY_NAME, EXPRESSION, ENABLE, POLICY_DESCRIPTION
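For context, rows appear in these views when a redaction policy is created with the DBMS_REDACT package. A minimal sketch, assuming a hypothetical HR.EMPLOYEE table (the table, column, and policy names are illustrative; the API call is standard):

```sql
BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema => 'HR',
    object_name   => 'EMPLOYEE',
    policy_name   => 'redact_emp_address',
    column_name   => 'EMP_ADDRESS',
    function_type => DBMS_REDACT.FULL,  -- redact the whole value
    expression    => '1=1');            -- apply to every session
END;
/
```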
Oracle indexes may get into the UNUSABLE state for multiple reasons; one of them is dropping a partition of a partitioned table that has global indexes without using the 'UPDATE INDEXES' option.
Identifying indexes that are in UNUSABLE state (for a table):- select * from all_indexes where status = 'UNUSABLE' and table_name = 'MY_TABLE';
Rebuild the indexes so that they return to the 'VALID' state (use the 'online' option if the table is in continuous use):
select 'alter index ' || owner || '.' || index_name || ' rebuild online;' from all_indexes where status = 'UNUSABLE' and table_name = 'MY_TABLE';
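Mirroring the earlier shrink example, a statement generated by this query would look like the following (owner and index name are illustrative):

```sql
--example:
alter index MY_SCHEMA.EMPLOYEE_IDX1 rebuild online;
```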
Find out which session is currently holding a lock on (blocking access to) a particular object:-
select a.sid, a.serial# from v$session a, v$locked_object b, dba_objects c where b.object_id = c.object_id and a.sid = b.session_id and OBJECT_NAME='MY_TABLE';
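Once you have the sid and serial# and have confirmed the session is safe to terminate, the standard follow-up (an addition, not part of the original note) is:

```sql
-- substitute the sid and serial# returned by the query above
alter system kill session '1234,56789' immediate;
```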
Find out which session(s) have locked an object in the history:-
Step 1:- Scan the active session history for your session which is blocked by another session
select * from dba_hist_active_sess_history
where blocking_session is not null
and current_obj# in (select object_id from dba_objects
                     where owner = 'MY_SCHEMA'       -- enter the object owner here
                     -- and object_name = 'TEST1'    -- optionally filter by object name
                    );
Step 2:- Find the SQL_ID to see exactly what the blocking session was doing
select * from dba_hist_active_sess_history
where session_id = 1234 and session_serial# = 12345
and current_obj# in (select object_id from dba_objects
                     where owner = 'MY_SCHEMA'       -- enter the object owner here
                     -- and object_name = 'TEST1'    -- optionally filter by object name
                    );
Step 3:- Check the sql_fulltext of that sql_id in v$sql.
select sql_fulltext from v$sql where sql_id = '&sql_id'; -- the sql_id found in step 2
Based on the following SQL I got from Stack Overflow:
SELECT * from gv$active_session_history -- or use dba_hist_active_sess_history for further back
where blocking_session is not null
and current_obj# = (select object_id from dba_objects
                    where owner = 'JHELLER'      -- enter the object owner here
                    and object_name = 'TEST1');  -- enter the object name here