Thursday, August 29, 2019

SQL Server log shipping implementation Steps

1. Create a user account with the same name and password on both servers. Make sure File Sharing is enabled on the local area connection between the servers, and allow file sharing through the firewall.
2. Create a folder named "bkp" on both servers.
3. On both servers, share the bkp folder with full access permission and grant the "user" account access to the bkp share.
5. Go to Control Panel -> Administrative Tools -> Services and find the SQL Server Agent service. Open its properties and set "user" as the account on the Log On tab, then restart the service. Do this on both servers.



                                             
6. On the "sa" account, turn off the password expiration policy. This prevents the sa password from expiring automatically.
7. Take a full backup as follows:
i) Right-click the database you want to back up, select "Tasks" and then "Back Up".
ii) Select the backup options and format; this allows you to back up to a path other than the default backup location.


iii) Add the destination path where you want the backup to be written.
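The same full backup can also be scripted in T-SQL; a minimal sketch, assuming a database named MyDB and the shared C:\bkp folder from step 2:

-- Full backup of MyDB to the shared bkp folder (database name and path are illustrative)
BACKUP DATABASE MyDB
TO DISK = 'C:\bkp\MyDB_full.bak'
WITH INIT, STATS = 10;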


8. Take a transaction log backup as follows:
i) Right-click the database you want to back up, select "Tasks" and then "Back Up", and choose Transaction Log as the backup type.
ii) Select the backup options and format; this allows you to back up to a path other than the default backup location.

NOTE: When taking a transaction log backup manually, do not use the .trn extension; use the .tra extension instead, otherwise it will affect the procedures created for log shipping monitoring.




iii) Add the destination path where you want the backup to be written.
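Scripted in T-SQL, a manual log backup following the note above might look like this (names are illustrative):

-- Manual transaction log backup using the .tra extension, as advised in the note above
BACKUP LOG MyDB
TO DISK = 'C:\bkp\MyDB_manual.tra'
WITH INIT, STATS = 10;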



9. Copy the backup files into the C:\bkp folder on the second server (the secondary server).
10. Then restore the database as follows:
i) Right-click Databases (or the target database) and select the "Restore Database" option.


ii) Select the location on the secondary server where you copied the primary server's full backup.




iii) On the Options page, select the Restore "WITH STANDBY" option.
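As a T-SQL sketch (file names and the undo file path are illustrative), the standby restore of the full backup would be:

-- Restore the full backup on the secondary, leaving it in STANDBY (read-only) mode
RESTORE DATABASE MyDB
FROM DISK = 'C:\bkp\MyDB_full.bak'
WITH STANDBY = 'C:\bkp\MyDB_undo.dat', STATS = 10;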


11. Apply the transaction log as given below:
i) Right-click the database you restored from the full backup, select "Tasks", then "Restore", then "Transaction Log".
ii) Select the location on the secondary server where you copied the primary server's transaction log backup.
iii) On the Options page, select the Restore "WITH STANDBY" option.
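The equivalent T-SQL for applying the log backup while keeping the database in standby (again, names are illustrative):

-- Apply the manual log backup, keeping the secondary in STANDBY mode
RESTORE LOG MyDB
FROM DISK = 'C:\bkp\MyDB_manual.tra'
WITH STANDBY = 'C:\bkp\MyDB_undo.dat';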
12. We have now created the standby database on the secondary server.

13. To start log shipping from the primary server, select the options as follows:
i) Right-click the database for which you want to create the standby.
ii) Select "Tasks" and then "Ship Transaction Logs".




iii) Select the option to enable this as a primary database in a log shipping configuration.


iv) Give the full network path of the backup folder on the primary server where the transaction log backups are being generated.
     
   

v) You can change the job schedule by clicking the "Schedule" tab, and set the retention period after which old backup files are deleted on the secondary server.


vi) Connect to the secondary server using the "Connect" button.








vii) Select the destination folder on the secondary server where the transaction log backups will be copied.


viii) Use Standby mode, with "Disconnect users in the database when restoring backups" selected.





The log shipping jobs have now been created.
14. Everything is fine if you see one job, LSBackup, on the primary server.






15. And two jobs, LSCopy and LSRestore, on the secondary server.



16. Then start the jobs in the sequence LSBackup, LSCopy, LSRestore (otherwise they simply run on their own schedules).
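If you prefer to kick the jobs off from T-SQL rather than the Agent GUI, something like the following works; the exact job names depend on what the log shipping wizard generated (typically LSBackup_<db>, LSCopy_<primary>_<db>, LSRestore_<primary>_<db>), so treat these names as illustrative:

-- On the primary server
EXEC msdb.dbo.sp_start_job @job_name = N'LSBackup_MyDB';
-- On the secondary server
EXEC msdb.dbo.sp_start_job @job_name = N'LSCopy_PRIMARYSRV_MyDB';
EXEC msdb.dbo.sp_start_job @job_name = N'LSRestore_PRIMARYSRV_MyDB';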





17. You can check the job history if any problem occurs.






Restore standby database using incremental backup


   1)      Find the current scn of Standby database

select  current_scn from v$database;

CURRENT_SCN
-----------
    750098

    2)      Take an incremental backup on the primary database, starting from the SCN shown in step 1

BACKUP DEVICE TYPE DISK INCREMENTAL FROM SCN 750098 DATABASE
     FORMAT '/tmp/incr_standby/backup_%U';

   3)      Create a standby control file backup on the primary database

ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/home/oracle/standby_control.ctl';

   4)      Copy the incremental backup and the standby control file backup from the primary server to the standby server (in this example into /tmp/incr_standby/, with the control file copied as stby.ctl)

   5)      Check the status of the standby and the control file locations

select open_mode , database_role ,name from v$database;

OPEN_MODE  DATABASE_ROLE    NAME
---------- ---------------- ---------
MOUNTED    PHYSICAL STANDBY ORCL

select name, value from v$parameter where name='control_files';

NAME            VALUE
--------------- --------------------------------------------------
control_files   /u01/app/oracle/oradata/orcl/control01.ctl, /u01/a
pp/oracle/flash_recovery_area/orcl/control02.ctl


    6)      Cancel managed recovery on the standby and shut down the standby database

SQL> alter database recover managed standby database cancel;
Database altered.

SQL> shutdown immediate;
    
     7)      Restore the standby control file on the standby and mount the database


$ rman target /

RMAN> restore controlfile from '/tmp/incr_standby/stby.ctl';
RMAN> startup mount


    8)      Catalog the backup pieces on the standby database


RMAN> catalog start with '/tmp/incr_standby/';


    9)      Recover the database using the incremental backup (noredo)

RMAN>  recover database noredo;


     10)   Start managed recovery on the standby database

SQL> alter database recover managed standby database disconnect from session;
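To confirm the standby has rolled forward, you can re-check the SCN and the recovery processes (a quick sanity check; the SCN should now be well past the value noted in step 1):

SQL> select current_scn from v$database;
SQL> select process, status, sequence# from v$managed_standby;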


Enterprise Manager Grid Control Architecture


Although Enterprise Manager Grid Control is viewed as a single entity, technically it is built from the following software components:

   1)      Oracle Management Agent (Management Agent)

Management Agent is an integral software component that is deployed on each
monitored host. It is responsible for monitoring all the targets running on those
hosts, communicating that information to the middle-tier Oracle Management
Service, and managing and maintaining the hosts and its targets.

The configuration file for the Management Agent is $OH/<hostname>_<sid>/sysman/config/emd.properties. The Agent uploads managed data (from the servers it monitors) to the Management Service via the HTTP Server (the URL of the Management Service HTTP Server is defined by the REPOSITORY_URL parameter in emd.properties).
The Management Agent software also includes an inbuilt HTTP listener (different from the standalone HTTP Server) to accept messages (data) from the Management Service; this URL is defined by the EMD_URL parameter in emd.properties.

Management Agent log and trace files are in $AGENT_HOME (or $OH/<hostname>_<sid>)/sysman/: emagent.log, emagent.trc, emagentfetchlet.log, emagentfetchlet.trc, emagent.nohup
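For day-to-day checks of the Agent and its uploads to the Management Service, the standard emctl commands from the Agent home can be used, for example:

$AGENT_HOME/bin/emctl status agent
$AGENT_HOME/bin/emctl upload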

   2)      Oracle Management Service (OMS)

OMS is a J2EE Web application that orchestrates with Management Agents to
discover targets, monitor and manage them, and store the collected information in
a repository for future reference and analysis. OMS also renders the user interface
for the Enterprise Manager Grid Control console. OMS is deployed to the Oracle
Middleware home, which is the parent directory that has the Oracle WebLogic
Server home, the Web tier instance files, and, optionally, one or more Oracle
homes.

The configuration file for the Management Service is $OH/<hostname>_<sid>/sysman/config/emoms.properties.
Repository connection details are defined by the parameters emdRepSID, emdRepServer, emdRepConnectDescriptor and emdRepUser in emoms.properties.
The Management Service also monitors the Management Agent (to check it is up and running), submits Enterprise Manager jobs and performs other functions using the EMD_URL (the Management Agent runs an inbuilt HTTP listener) defined in emoms.properties.

Management Service log and trace files are in $AS_HOME (or $OH/<hostname>_<sid>)/sysman/: emoms.log, emoms.trc
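Similarly, the Management Service status can be checked from the OMS home (here $OMS_HOME stands for the OMS Oracle home):

$OMS_HOME/bin/emctl status oms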
    3)      Oracle Management Repository (Management Repository)

Management Repository is the storage location where all the information collected
by the Management Agent gets stored. It consists of objects such as database jobs,
packages, procedures, views, and tablespaces.
Management Service connects to Management Repository using JDBC
    4)      Enterprise Manager Grid Control Console

Enterprise Manager Grid Control Console is the user interface you see after you
install Enterprise Manager Grid Control. With the help of the console, you can
monitor and administer your entire computing environment from one location on
the network. All the services within your enterprise, including hosts, databases,
listeners, application servers, and so on, are easily managed from one central location.


5) HTTP Server: receives requests from Webcache, pulls data from the Repository via the Management Service, and returns the response back to Webcache. The HTTP Server also receives data from Management Agents and forwards it to the Management Service to be stored in the Management Repository.

6) Webcache: acts as a web accelerator; it forwards requests from users (console) to the HTTP Server and returns the responses from the HTTP Server back to the users (console).






Communication between Management Agent & Service

The Management Agent connects to the Management Service via the HTTP Server of the Grid Control middleware, whereas the Management Service connects directly (using the HTTP protocol) to the Management Agent (the Agent software includes an inbuilt HTTP server).

Friday, August 9, 2019

ASM Interview Questions


ASM Architecture
Q. What init.ora parameters does a user need to configure for ASM instances?
Ans. The default parameter settings work perfectly for ASM. The only parameters needed for 11g ASM:
• PROCESSES
• ASM_DISKSTRING*
• ASM_DISKGROUPS*
• INSTANCE_TYPE*
Q. How does the database interact with the ASM instance and how do I make ASM go faster?
Ans. ASM is not in the I/O path so ASM does not impede the database file access. Since the RDBMS instance is performing raw I/O, the I/O is as fast as possible.
Q. Do I need to define the RDBMS FILESYSTEMIO_OPTIONS parameter when I use ASM?
Ans.  No, the RDBMS does I/O directly to the raw disk devices, the FILESYSTEMIO_OPTIONS parameter is only for filesystems.
Q. Don’t I lose all the advanced filesystem features when I move to ASM; e.g., direct I/O, write coalescing, and pre-fetch?
Ans. Yes, but that’s okay.
• Most of the filesystem features mentioned, though good for general file data performance, interfere and fractionalize the benefits inherently provided by the database; e.g.,
• DBWR & LGWR do write coalescing, and user processes do sequential pre-fetches
• All IO capable processes do un-buffered IO (raw IO) because of ASM
Q. This is cool that ASM can now store Vote and OCR files. But how does CSS and CRS startup in this configuration?
Ans. It just does.
• There are two key processes and a lot of crafty coding to get this to work in the correct startup sequence.
ASMCMD> pwd
+DATA/rst-cluster/OCRFILE
ASMCMD> ls -l
Type   Redund  Striped   Time   Sys  Name
OCRFILE  UNPROT  COARSE  JUN 25 11:00:00 Y  REGISTRY.255.718984285
ASMCMD> lsdg
State    Type    Rebal  Sector  Block  AU       Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N      512     4096   1048576  203824    193028   0                193028           0              Y             DATA/
ASM Configuration
Q. Do I need 11gR2 Grid Infrastructure to use ASM?
Ans. Yes. ASM is now part of Grid Infrastructure, which includes Clusterware, ASM and ACFS, so you'll need to install GI to use ASM.
• In 11gR2 there are two options for install – GI for Standalone Server (aka Oracle Restart) and GI for Clusterware
Q. We have a 16 TB database. I’m curious about the number of disk groups we should use; e.g. 1 large disk group, a couple of disk groups, or otherwise? What about a database consolidation scenario?
Ans. For VLDBs you will probably end up with different storage tiers; e.g. with some of our large customers they have Tier1 (RAID10 FC), Tier2 (RAID5 FC), Tier3 (SATA), etc. Each one of these is mapped to a diskgroup.
Q. What is the best LUN size for ASM?
Ans. There is no best size! In most cases, the storage team will dictate this based on their standardized LUN size. The ASM admin merely has to communicate the ASM best practices and application characteristics to the storage folks:
• Need equally sized / performance LUNs
• Minimum of 4 LUNs
• The capacity requirement
• The workload characteristic (random r/w, sequential r/w) & any response time SLA
Using this info, and their standards, the storage folks should build a nice LUN group set for you
Q. In 11gR2 can my RDBMS and ASM instances run different versions?
Ans. Yes. But since ASM is now part of the GI stack, it must be at the highest version. Keep in mind, there are two components of compatibility:
• Software compatibility
• Diskgroup compatibility attributes:
• compatible.asm
• compatible.rdbms
• Need to have compatible.asm set to 11.2.0.1 for OCR/Vote files and SPFILE in ASM
•ACFS also needs 11.2.0.1 compatible.asm
•Advance compatible.asm
• ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm' = '11.2.0.1.0';
Q. Where do I run my database listener from; i.e., ASM HOME or DB HOME?
Ans. For 11gR2, the SCAN listener is run from GI Home, and database listener from DB HOME.
• For pre-11gR2, it is recommended to run the listener from the ASM HOME. This is particularly important for RAC env, since the listener is a node-level resource. In this config, you can create additional [user] listeners from the database homes as needed.
Backups
Q. How do I backup my ASM instance?
Ans. Not applicable! ASM has no files to backup
Q. When should I use RMAN and when should I use ASMCMD copy?
Ans. RMAN is the recommended and most complete and flexible method to backup and transport database files in ASM.
ASMCMD copy is good for copying single files
• Supports all Oracle file types
• In some cases, it can be used to instantiate a Data Guard environment
• Does not update the controlfile
• Does not create OMF files
ASMCMD> ls
+fra/dumpsets/expdp_5_5.dat
ASMCMD> cp expdp_5_5.dat sys@rac1.orcl1:+DATA/dumpsets/expdp_5_5.dat
source +fra/dumpsets/expdp_5_5.dat
target +DATA/dumpsets/expdp_5_5.dat
copying file(s)…
file, +DATA/dumpsets/expdp_5_5.dat,
copy committed.
Migration
Q. We are migrating to a new storage array. How do I move my ASM database from storage A to storage B?
Ans:
• Given that the new and old storage are both visible to ASM, simply add the new disks to the ASM disk group and drop the old disks. ASM rebalance will migrate the data online.
• For pre-11gR2, See Note 428681.1, which covers how to move OCR/Voting disks to the new storage array
ASM_SQL> alter diskgroup DATA drop disk data_legacy1, data_legacy2, data_legacy3 add disk
'/dev/sddb1', '/dev/sddc1', '/dev/sddd1';
ASM Rebalancing
• Automatic online rebalance whenever storage configuration changes
• Only move data proportional to storage added
• No need for manual I/O tuning
• Online migration to new storage
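The progress of such a rebalance can be watched from the ASM instance; a simple check against V$ASM_OPERATION:

SQL> select group_number, operation, state, power, sofar, est_work, est_minutes from v$asm_operation;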
Q. Is it possible to unplug an ASM disk group from one platform and plug into a server on another platform (for example, from Solaris to Linux)?
Ans. No. Cross-platform disk group migration is not supported. To move datafiles between platforms of different endianness, you need to use XTTS, Data Pump or Streams.
ACFS
Q. What is ASM Cluster File System (ACFS)?
• General purpose scalable file system
• Journaling, extent based
• Single node and cluster
• POSIX, X/OPEN file system solution for UNIX/Linux
• Windows file system solution for Windows platforms
• Accessible through NAS protocols (NFS, CIFS)
• Leverages ASM technology
• Integrated with Oracle Clusterware for cluster support
• Multi OS platform (Linux and Windows at initial release)
• Integrated with Oracle system mgt tools
• Oracle installation and configuration
• Enterprise Manager and ASM Storage mgt tools
• Native OS File System Management tools
ACFS Features
Provides filesystem snapshots (FCOW)
• File system integrity and fast recovery via ACFS metadata checksums and journaling.
• ACFS is designed as a peer-to-peer, multi-node, shared file system model and delivers coherent data access
• ACFS file system is installed as a dynamically loadable OS VFS driver
• Starting with RHEL5, Red Hat now supports a 'white list' of kernel APIs which they commit not to change in updates or patches. The APIs used by ACFS-ADVM were added to their 'white list'.
• Customers should be able to install an update or patch to the kernel and our drivers should not be impacted
Q. Is ACFS supported on other platforms besides Linux?
Ans. Yes. Other platforms are forthcoming.
Q. Can ACFS be used to store database datafiles? What about archive logs?
Ans. No. Currently database files are not supported in ACFS, for performance reasons, though you can do this in test/QA environments where performance is not essential.
Q. Can I use ACFS to store BFILE data or other non-database related data?
Ans. Yes. ACFS is a POSIX-compliant filesystem, and thus can store any file data type (besides database files ☺).
Q. Will ACFS support other data services, such as advanced cloning, replication, de-dupe, etc.?
Ans. Yes. ACFS Replication will be introduced in the next patchset release. Other advanced features are part of the roadmap.
3rd Party Software
Q. How does ASM work with multipathing software?
Ans: It works great! Multipathing software is at a layer lower than ASM, and thus is transparent.
You may need to adjust ASM_DISKSTRING to specify only the path to the multipathing pseudo devices.
Q. Is ASM constantly rebalancing to manage “hot spots”?
Ans. No…No…Nope!!
Q. Is ASMLIB required on Linux systems and are there any benefits to using it?
Ans. ASMLIB is not required to run ASM, but it is certainly recommended.
ASMLIB has following benefits:
• Simplified disk discovery
• Persistent disk names
• Efficient use of system resources
Q. Is it possible to do rolling upgrades on ASMLIB in a RAC configuration?
Ans. ASMLIB is independent of Oracle Clusterware and Oracle Database, and thus can be upgraded on its own.
Conclusion:
•ASM requires very few parameters to run
•ASM based databases inherently leverage raw disk performance
•No additional database parameters needed to support ASM
•Mixed ASM-database version support
•Facilitates online storage changes
•RMAN recommended for backing up ASM based databases
•Spreads I/O evenly across all disks to maximize performance and eliminates hot spot
************************* ASM general questions *********************
1. What is the use of ASM (or) Why ASM preferred over filesystem?
Ans:       ASM provides striping and mirroring.
2. What are the init parameters related to ASM?
INSTANCE_TYPE = ASM
ASM_POWER_LIMIT = 11
ASM_DISKSTRING = '/dev/rdsk/*s2', '/dev/rdsk/c1*'
ASM_DISKGROUPS = DG_DATA, DG_FRA
3. What is rebalancing (or) what is the use of ASM_POWER_LIMIT?
ASM_POWER_LIMIT is a dynamic parameter which controls the speed of rebalancing data across disks.
The value can be 1 (lowest) to 11 (highest).
4. What are different types of redundancies in ASM & explain?
External redundancy,
Normal redundancy,
High redundancy.
5. How to copy file to/from ASM from/to filesystem?
By using ASMCMD cp command
6. How to find out the databases, which are using the ASM instance?
ASMCMD> lsct
SQL> select DB_NAME from V$ASM_CLIENT;
7. What are different types of stripings in ASM & their differences?
Fine-grained striping
Coarse-grained striping
lsdg
select NAME,ALLOCATION_UNIT_SIZE from v$asm_diskgroup;
8. What is allocation unit and what is default value of au_size and how to change?
Every ASM disk is divided into allocation units (AU). An AU is the fundamental unit of allocation within a disk group. A file extent consists of one or more AU. An ASM file consists of one or more file extents.
CREATE DISKGROUP disk_group_2 EXTERNAL REDUNDANCY DISK '/dev/sde1' ATTRIBUTE 'au_size' = '32M';
9. What are the background processes in ASM?
10. What process does the rebalancing?
RBAL, ARBn
11. How to add/remove disk to/from diskgroup?
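A minimal sketch of the usual answer (the disk path and disk name are illustrative; the drop uses the ASM disk name, not the OS path):

SQL> alter diskgroup DATA add disk '/dev/sde1' rebalance power 4;
SQL> alter diskgroup DATA drop disk DATA_0003;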
Oracle Data Guard Interview Questions
1. How to setup Data Guard?
2. What are different types of modes in Data Guard and which is default?
Maximum performance:
This is the default protection mode. It provides the highest level of data protection that is possible without affecting the performance of a primary database. This is accomplished by allowing transactions to commit as soon as all redo data generated by those transactions has been written to the online redo log.
Maximum protection:
This protection mode ensures that no data loss will occur if the primary database fails. To provide this level of protection, the redo data needed to recover a transaction must be written to both the online redo log and to at least one standby database before the transaction commits. To ensure that data loss cannot occur, the primary database will shut down, rather than continue processing transactions.
Maximum availability:
This protection mode provides the highest level of data protection that is possible without compromising the availability of a primary database. Transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log and to at least one standby database.
3. How many standby databases we can create (in 10g/11g)?
Till Oracle 10g, 9 standby databases are supported.
From Oracle 11g R2, we can create 30 standby databases.
4. What are the parameters we’ve to set in primary/standby for Data Guard?
5. What is the use of fal_server & fal_client, is it mandatory to set these?
6. What are differences between physical, logical, snapshot standby and ADG (or) what are different types of standby databases?
Physical standby – in mount state, MRP will apply archives
ADG – in READ ONLY state, MRP will apply archives
Logical standby – in READ ONLY state, LSP will run
Snapshot standby databases – Physical standby database can be converted to snapshot standby database, which will be in READ WRITE mode, can do any kind of testing, then we can convert back snapshot standby database to physical standby database and start MRP which will apply all pending archives.
7. How to find out backlog of standby?
select round((sysdate - a.NEXT_TIME)*24*60) as "Backlog", m.SEQUENCE#-1 "Seq Applied", m.process, m.status
from v$archived_log a, (select process, SEQUENCE#, status from v$managed_standby where process like '%MRP%') m where a.SEQUENCE#=(m.SEQUENCE#-1);
8. If you didn’t have access to the standby database and you wanted to find out what error has occurred in a data guard configuration, what view would you check in the primary database to check the error message?
You can check the v$dataguard_status view.
select message from v$dataguard_status;
9. How can u recover standby which far behind from primary (or) without archive logs how can we make standby sync?
By using RMAN incremental backup.
10. What is snapshot standby (or) How can we give a physical standby to user in READ WRITE mode and let him do updates and revert back to standby?
Till Oracle 10g: create a guaranteed restore point, open in read write, let him do updates, flashback to the restore point, start MRP.
From Oracle 11g, convert physical standby to snapshot standby, let him do updates, convert to physical standby, start MRP.
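A sketch of the 11g approach on the standby (the standby must be mounted for each conversion, and a flash recovery area must be configured):

SQL> alter database recover managed standby database cancel;
SQL> alter database convert to snapshot standby;
SQL> alter database open;
-- ... let the user do read-write testing here ...
SQL> shutdown immediate
SQL> startup mount
SQL> alter database convert to physical standby;
SQL> shutdown immediate
SQL> startup mount
SQL> alter database recover managed standby database disconnect from session;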
11. What are new features in 11g Data Guard?

12. What are the uses of standby redo log files?

13. What is dg_config?
14. What is RTA (real time apply) mode MRP?

15. What is the difference between normal MRP (managed apply) and RTA MRP (real time apply)?

16. What are various parameters in log_archive_dest and it’s use?

17. What is the difference between SYNC/ASYNC, LGWR/ARCH, and AFFIRM/NOAFFIRM?
18. What is Data Guard broker (or) what is the use of dgmgrl?

19. What is StaticConnectIdentifier property used for?

20. What is failover/switchover (or) what is the difference between failover & switchover?

21. What are the background processes involved in Data Guard?
MRP, LSP,
Oracle RMAN Interview Questions/FAQs
1. Difference between catalog and nocatalog?
2. Difference between using recovery catalog and control file?
When new incarnation happens, the old backup information in control file will be lost. It will be preserved in recovery catalog.
In recovery catalog, we can store scripts.
Recovery catalog is central and can have information of many databases.
3. Can we use same target database as catalog?
No. The recovery catalog should not reside in the target database (the database to be backed up), because if the target database is lost you would lose the catalog along with it.
4. How do u know how much RMAN task has been completed?
By querying v$rman_status or v$session_longops
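For example, a common progress query against V$SESSION_LONGOPS while a backup or restore is running:

SQL> select sid, serial#, opname, round(sofar/totalwork*100,2) "PCT_DONE"
     from v$session_longops
     where opname like 'RMAN%' and totalwork > 0 and sofar <> totalwork;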
5. From where list & report commands will get input?
6. Command to delete archive logs older than 7days?
RMAN> delete archivelog all completed before 'sysdate-7';
7. How many days backup, by default RMAN stores?
8. What is the use of crosscheck command in RMAN?
Crosscheck will be useful to check whether the catalog information is intact with OS level information.
9. What are the differences between crosscheck and validate commands?
10. Which one is better, differential (incremental) backup or cumulative (incremental) backup?
A differential backup, which backs up all blocks changed after the most recent incremental backup at level 1 or 0
A cumulative backup, which backs up all blocks changed after the most recent incremental backup at level 0
11. What is Level 0, Level 1 backup?
A level 0 incremental backup, which is the base for subsequent incremental backups, copies all blocks containing data, backing the datafile up into a backup set just as a full backup would. A level 1 incremental backup can be either of the following types:
A differential backup, which backs up all blocks changed after the most recent incremental backup at level 1 or 0
A cumulative backup, which backs up all blocks changed after the most recent incremental backup at level 0
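For reference, the corresponding RMAN commands (the CUMULATIVE keyword is what distinguishes the two level 1 variants):

RMAN> backup incremental level 0 database;
RMAN> backup incremental level 1 database;               -- differential (default)
RMAN> backup incremental level 1 cumulative database;    -- cumulative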
12. Can we perform level 1 backup without level 0 backup?
If no level 0 backup is available, then the behavior depends upon the compatibility mode setting. If compatibility < 10.0.0, RMAN generates a level 0 backup of the file contents at the time of the backup. If compatibility is >= 10.0.0, RMAN copies all blocks changed since the file was created, and stores the results as a level 1 backup. In other words, the SCN at the time the incremental backup is taken is the file creation SCN.
13. Will RMAN put the database/tablespace/datafile in backup mode?
Nope.
14. What is snapshot control file?
15. What is the difference between backup set and backup piece?
Backup set is logical and backup piece is physical.
16. RMAN command to backup for creating standby database?
RMAN> duplicate target database to standby database ….
17. How to do cloning by using RMAN?
RMAN> duplicate target database …
18. You loss one datafile and DB is running in ARCHIVELOG mode. You have full database backup of 1 week/day old and don’t have backup of this (newly created) datafile. How do you restore/recover file?
create the datafile and recover that datafile.
SQL> alter database create datafile '...path...' size n;
RMAN> recover datafile file_id;
19. What is obsolete backup & expired backup?
A status of “expired” means that the backup piece or backup set is not found in the backup destination.
A status of “obsolete” means the backup piece is still available, but it is no longer needed. The backup piece is no longer needed since RMAN has been configured to no longer need this piece after so many days have elapsed, or so many backups have been performed.
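The two statuses are typically handled with the following RMAN commands:

RMAN> crosscheck backup;
RMAN> delete expired backup;
RMAN> report obsolete;
RMAN> delete obsolete;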
20. What is the difference between hot backup & RMAN backup?
For hot backup, we have to put database in begin backup mode, then take backup.
RMAN won’t put database in backup mode.
21. How to put manual/user-managed backup in RMAN (recovery catalog)?
By using catalog command.
RMAN> CATALOG START WITH '/tmp/backup.ctl';
22. What are new features in Oracle 11g RMAN?
23. What is the difference between auxiliary channel and maintenance channel?

Oracle Export/Import (exp/imp) - Data Pump (expdp/impdp) Interview Questions
1. What is use of CONSISTENT option in exp?
Cross-table consistency. Implements SET TRANSACTION READ ONLY. Default value N.
2. What is use of DIRECT=Y option in exp?
Setting DIRECT=Y extracts data by reading it directly, bypassing the SGA and the SQL command-processing layer (evaluating buffer), so it should be faster. Default value N.
3. What is use of COMPRESS option in exp?
Imports into one extent. Specifies how export will manage the initial extent for the table data. This parameter is helpful during database re-organization. Export the objects (especially tables and indexes) with COMPRESS=Y. If a table was spanning 20 extents of 1M each (which is not desirable from a performance point of view), and you export the table with COMPRESS=Y, the generated DDL will have an initial extent of 20M. Later on, when importing, the extents will be coalesced. Sometimes it is desirable to export with COMPRESS=N, in situations where you do not have contiguous space on disk (tablespace) and do not want imports to fail.
4. How to improve exp performance?
1. Set the BUFFER parameter to a high value. Default is 256KB.
2. Stop unnecessary applications to free the resources.
3. If you are running multiple sessions, make sure they write to different disks.
4. Do not export to NFS (Network File System). Exporting to local disk is faster.
5. Set the RECORDLENGTH parameter to a high value.
6. Use DIRECT=yes (direct mode export).
5. How to improve imp performance?
1. Place the file to be imported in separate disk from datafiles.
2. Increase the DB_CACHE_SIZE.
3. Set LOG_BUFFER to big size.
4. Stop redolog archiving, if possible.
5. Use COMMIT=n, if possible.
6. Set the BUFFER parameter to a high value. Default is 256KB.
7. It’s advisable to drop indexes before importing to speed up the import process or set INDEXES=N and building indexes later on after the import. Indexes can easily be recreated after the data was successfully imported.
8. Use STATISTICS=NONE
9. Disable the INSERT triggers, as they fire during import.
10. Set Parameter COMMIT_WRITE=NOWAIT(in Oracle 10g) or COMMIT_WAIT=NOWAIT (in Oracle 11g) during import.
6. What is use of INDEXFILE option in imp?
Will write DDLs of the objects in the dumpfile into the specified file.
7. What is use of IGNORE option in imp?
Will ignore the errors during import and will continue the import.
8. What are the differences between expdp and exp (Data Pump or normal exp/imp)?
Data Pump is server centric (files will be at server).
Data Pump has APIs, from procedures we can run Data Pump jobs.
In Data Pump, we can stop and restart the jobs.
Data Pump will do parallel execution.
Tapes & pipes are not supported in Data Pump.
Data Pump consumes more undo tablespace.
Data Pump import will create the user, if user doesn’t exist.
9. Why expdp is faster than exp (or) why Data Pump is faster than conventional export/import?
Data Pump is block mode, exp is byte mode.
Data Pump will do parallel execution.
Data Pump uses direct path API.
10. How to improve expdp performance?
Using parallel option which increases worker threads. This should be set based on the number of cpus.
11. How to improve impdp performance?
Using parallel option which increases worker threads. This should be set based on the number of cpus.
12. In Data Pump, where the jobs info will be stored (or) if you restart a job in Data Pump, how it will know from where to resume?
Whenever a Data Pump export or import is running, Oracle creates a master table named after the JOB_NAME, which is dropped once the job is done. From this table, Oracle finds out how much of the job has completed and from where to continue, etc.
Default export job name will be SYS_EXPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE.
Default import job name will be SYS_IMPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE.
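Running and stopped jobs (and hence their master tables) can be listed with:

SQL> select owner_name, job_name, operation, state from dba_datapump_jobs;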
13. What is the order of importing objects in impdp?
Tablespaces
Users
Roles
Database links
Sequences
Directories
Synonyms
Types
Tables/Partitions
Views
Comments
Packages/Procedures/Functions
Materialized views
14. How to import only metadata?
CONTENT= METADATA_ONLY
15. How to import into different user/tablespace/datafile/table?
REMAP_SCHEMA
REMAP_TABLESPACE
REMAP_DATAFILE
REMAP_TABLE
REMAP_DATA
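A command-line sketch combining a few of these remaps (directory, dumpfile, schema and tablespace names are illustrative):

impdp system/password directory=DATA_PUMP_DIR dumpfile=scott.dmp remap_schema=scott:hr remap_tablespace=users:hr_data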
16. How to export/import without using external directory?
17. Using Data Pump, how to export in higher version (11g) and import into lower version (10g), can we import to 9i?
18. Using normal exp/imp, how to export in higher version (11g) and import into lower version (10g/9i)?
19. How to do transport tablespaces (and across platforms) using exp/imp or expdp/impdp?
Oracle RAC Interview Questions
1. What is the use of RAC?

2. What are the prerequisites for RAC setup?

3. What are Oracle Clusterware/Daemon processes and what they do?
Ans:
ocssd, crsd, evmd, oprocd, racgmain, racgimon
4. What are the special background processes for RAC (or) what is difference in stand-alone database & RAC database background processes?
DIAG, LCKn, LMD, LMSn, LMON
5. What are structural changes in 11g R2 RAC?
Ans:
Grid & ASM are on one home,
Voting disk & ocrfile can be on the ASM,
SCAN,
By using srvctl, we can manage diskgroups, home, ons, eons, filesystem, srvpool, server, scan, scan_listener, gns, vip, oc4j,
GSD
6. What are the new features in 11g (R2) RAC?
Ans:
Grid & ASM are on one home,
Voting disk & ocrfile can be on the ASM,
SCAN,
By using srvctl, we can manage diskgroups, home, ons, eons, filesystem, srvpool, server, scan, scan_listener, gns, vip, oc4j,
GSD
7. What is cache fusion?
Ans:
Transferring of data between RAC instances by using private network. Cache Fusion is the remote memory mapping of Oracle buffers, shared between the caches of participating nodes in the cluster. When a block of data is read from datafile by an instance within the cluster and another instance is in need of the same block, it is easy to get the block image from the instance which has the block in its SGA rather than reading from the disk.
8. What is the purpose of Private Interconnect?
Ans:
Clusterware uses the private interconnect for cluster synchronization (network heartbeat) and daemon communication between the clustered nodes. This communication is based on the TCP protocol. RAC uses the interconnect for cache fusion (UDP) and inter-process communication (TCP).
9. What are the Clusterware components?
Ans:
Voting Disk – Oracle RAC uses the voting disk to manage cluster membership by way of a health check and arbitrates cluster ownership among the instances in case of network failures. The voting disk must reside on shared disk.
Oracle Cluster Registry (OCR) – Maintains cluster configuration information as well as configuration information about any cluster database within the cluster. The OCR must reside on shared disk that is accessible by all of the nodes in your cluster. The daemon OCSSd manages the configuration info in OCR and maintains the changes to cluster in the registry.
Virtual IP (VIP) – When a node fails, the VIP associated with it is automatically failed over to some other node and new node re-arps the world indicating a new MAC address for the IP. Subsequent packets sent to the VIP go to the new node, which will send error RST packets back to the clients. This results in the clients getting errors immediately.
crsd – Cluster Resource Services Daemon
cssd – Cluster Synchronization Services Daemon
evmd – Event Manager Daemon
oprocd / hangcheck_timer – Node hang detector
10. What is OCR file?
Ans:
RAC configuration information repository that manages information about the cluster node list and instance-to-node mapping information. The OCR also manages information about Oracle Clusterware resource profiles for customized applications. Maintains cluster configuration information as well as configuration information about any cluster database within the cluster. The OCR must reside on shared disk that is accessible by all of the nodes in your cluster. The daemon OCSSd manages the configuration info in OCR and maintains the changes to cluster in the registry.
11. What is Voting file/disk and how many files should be there?
Ans:
Voting Disk File is a file on the shared cluster system or a shared raw device file. Oracle Clusterware uses the voting disk to determine which instances are members of a cluster. Voting disk is akin to the quorum disk, which helps to avoid the split-brain syndrome. Oracle RAC uses the voting disk to manage cluster membership by way of a health check and arbitrates cluster ownership among the instances in case of network failures. The voting disk must reside on shared disk.
12. How to take backup of OCR file?
Ans:
#ocrconfig -manualbackup
#ocrconfig -export file_name.dmp
#ocrdump -backupfile my_file
$cp -p -R /u01/app/crs/cdata /u02/crs_backup/ocrbackup/RAC1
13. How to recover OCR file?
Ans:
#ocrconfig -restore backup_file.ocr
#ocrconfig -import file_name.dmp
14. What is local OCR?
Ans:
/etc/oracle/local.ocr
/var/opt/oracle/local.ocr
15. How to check backup of OCR files?
Ans:
#ocrconfig –showbackup
16. How to take backup of voting file?
Ans:
dd if=/u02/ocfs2/vote/VDFile_0 of=$ORACLE_BASE/bkp/vd/VDFile_0
crsctl backup css votedisk         — from 11g R2
17. How do I identify the voting disk location?
Ans:
# crsctl query css votedisk
18. How do I identify the OCR file location?
check /var/opt/oracle/ocr.loc or /etc/ocr.loc
Ans:
# ocrcheck
19. If voting disk/OCR file got corrupted and don’t have backups, how to get them?
Ans:
We have to install Clusterware.
20. Who will manage OCR files?
Ans:
cssd will manage OCR
21. Who will take backup of OCR files?
Ans:
crsd will take backup.
22. What is split brain syndrome?
Ans:
Will arise when two or more instances attempt to control a cluster database. In a two-node environment, both instances attempt to manage updates simultaneously.
23. What are various IPs used in RAC? Or How may IPs we need in RAC?
Ans:
Public IP, Private IP, Virtual IP, SCAN IP
24. What is the use of virtual IP?
Ans:
When a node fails, the VIP associated with it is automatically failed over to some other node and new node re-arps the world indicating a new MAC address for the IP. Subsequent packets sent to the VIP go to the new node, which will send error RST packets back to the clients. This results in the clients getting errors immediately.
Without using VIPs or FAN, clients connected to a node that died will often wait for a TCP timeout period (which can be up to 10 min) before getting an error. As a result, you don’t really have a good HA solution without using VIPs.
25. What is the use of SCAN IP (SCAN name) and will it provide load balancing?
Ans:
Single Client Access Name (SCAN) is a new Oracle Real Application Clusters (RAC) 11g Release 2, feature that provides a single name for clients to access an Oracle Database running in a cluster. The benefit is clients using SCAN do not need to change if you add or remove nodes in the cluster.
26. How many SCAN listeners will be running?
Ans:
Three SCAN listeners only.
27. What is FAN?
Ans:
Applications can use Fast Application Notification (FAN) to enable rapid failure detection, balancing of connection pools after failures, and re-balancing of connection pools when failed components are repaired. The FAN process uses system events that Oracle publishes when cluster servers become unreachable or if network interfaces fail.
28. What is FCF?
Ans:
Fast Connection Failover provides high availability to FAN integrated clients, such as clients that use JDBC, OCI, or ODP.NET. If you configure the client to use fast connection failover, then the client automatically subscribes to FAN events and can react to database UP and DOWN events. In response, Oracle gives the client a connection to an active instance that provides the requested database service.
29. What is TAF and TAF policies?
Ans:
Transparent Application Failover (TAF) – A runtime failover for high availability environments, such as Real Application Clusters and Oracle Real Application Clusters Guard, TAF refers to the failover and re-establishment of application-to-service connections. It enables client applications to automatically reconnect to the database if the connection fails, and optionally resume a SELECT statement that was in progress. This reconnect happens automatically from within the Oracle Call Interface (OCI) library.
30. How will you upgrade RAC database?

31. What are rolling patches and how to apply?

32. How to add/remove a node?

33. What are nodeapps?
Ans:
VIP, listener, ONS, GSD
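Their status on a given node can be checked with srvctl (the node name is illustrative):

srvctl status nodeapps -n node1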
34. What is gsd (Global Service Daemon)?

35. How to do load balancing in RAC?

36. What are the uses of services? How to find out the services in cluster?
Ans:
Applications should use the services to connect to the Oracle database. Services define rules and characteristics (unique name, workload balancing, failover options, and high availability) to control how users and applications connect to database instances.
37. How to find out the nodes in cluster (or) how to find out the master node?
Ans:
# olsnodes  -- whichever node is displayed first is the master node of the cluster.
select MASTER_NODE from v$ges_resource;
To find out which is the master node, you can see ocssd.log file and search for “master node number”.
38. How to know the public IPs, private IPs, VIPs in RAC?
Ans:
# olsnodes -n -p -i
node1-pub       1       node1-prv       node1-vip
node2-pub       2       node2-prv       node2-vip
39. What utility is used to start DB/instance?
Ans:
srvctl start database –d database_name
srvctl start instance –d database_name –i instance_name
40. How can you shutdown single instance?
Ans:
Change cluster_database=false
srvctl stop instance –d database_name –i instance_name
41. What is HAS (High Availability Service) and the commands?
Ans:
HAS includes ASM & database instance and listeners.
crsctl check has
crsctl config has
crsctl disable has
crsctl enable has
crsctl query has releaseversion
crsctl query has softwareversion
crsctl start has
crsctl stop has [-f]
42. How many nodes are supported in a RAC Database?
Ans:
10g Release 2, support 100 nodes in a cluster using Oracle Clusterware, and 100 instances in a RAC database.
43. What is fencing?
Ans:
I/O fencing prevents updates by failed instances by detecting the failure and preventing split brain in the cluster. When a cluster node fails, the failed node needs to be fenced off from all the shared disk devices or diskgroups. This methodology is called I/O fencing, sometimes called disk fencing or failure fencing.
44. Why Clusterware installed in root (why not oracle)?

45. What are the wait events in RAC?
Ans:
gc buffer busy
gc buffer busy acquire
gc current request
gc cr request
gc cr failure
gc current block lost
gc cr block lost
gc current block corrupt
gc cr block corrupt
gc current block busy
gc cr block busy
gc current block congested
gc cr block congested.
gc current block 2-way
gc cr block 2-way
gc current block 3-way
gc cr block 3-way
(gc current/cr block n-way, n is number of nodes)
gc current grant 2-way
gc cr grant 2-way
gc current grant busy
gc current grant congested
gc cr grant congested
gc cr multi block read
gc current multi block request
gc cr multi block request
gc cr block build time
gc current block flush time
gc cr block flush time
gc current block send time
gc cr block send time
gc current block pin time
gc domain validation
gc current retry
ges inquiry response
gcs log flush sync
46. What is the difference between cr block and cur (current) block?

47. What are the initialization parameters that must have same value for every instance in an Oracle RAC database?
Ans:
ACTIVE_INSTANCE_COUNT
ARCHIVE_LAG_TARGET
COMPATIBLE
CLUSTER_DATABASE
CLUSTER_DATABASE_INSTANCES
CONTROL_FILES
DB_BLOCK_SIZE
DB_DOMAIN
DB_FILES
DB_NAME
DB_RECOVERY_FILE_DEST
DB_RECOVERY_FILE_DEST_SIZE
DB_UNIQUE_NAME
INSTANCE_TYPE
PARALLEL_MAX_SERVERS
REMOTE_LOGIN_PASSWORD_FILE
UNDO_MANAGEMENT
47. What are the new features in Oracle RAC 12c?
Ans:
48. What is the use of root.sh & oraInstRoot.sh?
Ans:
Changes ownership & permissions of oraInventory
Creating oratab file in the /etc directory
In RAC, starts the clusterware stack
49. What is transportable tablespace (and across platforms)?

50. How can you transport tablespaces across platforms with different endian formats?
Ans:
RMAN
51. What is xtss (cross platform transportable tablespace)?

52. What is the difference between restore point & guaranteed restore point?

53. What is the difference between 10g/11g OEM Grid control and 12c Cloud control?

54. What are the components of Grid control?
Ans:
OMS (Oracle Management Server)
OMR (Oracle Management Repository)
OEM Agent
55. What are the new features of 12c Cloud control?

56. How to find if your Oracle database is 32 bit or 64 bit?
Ans:
Execute the command "file $ORACLE_HOME/bin/oracle"; if you see output like /u01/db/bin/oracle: ELF 64-bit MSB executable SPARCV9 Version 1,
it means you are on 64-bit Oracle.
If your Oracle is 32-bit, you will see output like:
oracle: ELF 32-bit MSB executable SPARC Version 1
57. How to find opatch Version ?
Ans:
opatch is the utility used to apply database patches. To find the opatch version, execute "$ORACLE_HOME/OPatch/opatch version".
Oracle DBA Interview Questions/FAQs Part1
1. What is an instance?
SGA + background processes.
2. What is SGA?
System/Shared Global Area.
3. What is PGA (or) what is pga_aggregate_target?
Program Global Area (also known as Process Global Area).
4. What are new memory parameters in Oracle 10g?
SGA_TARGET PGA_TARGET
5. What are new memory parameters in Oracle 11g?
MEMORY_TARGET
6. What are the mandatory background processes?
DBWR LGWR SMON PMON CKPT RECO.
7. What are the optional background processes?
ARCH, MMAN, MMNL, MMON, CTWR, ASMB, RBAL, ARBx etc.
8. What are the new background processes in Oracle 10g?
MMAN MMON MMNL CTWR ASMB RBAL ARBx
9. What are the new features in Oracle 9i?
10. What are the new features in Oracle 10g?
11. What are the new features in Oracle 11g?
12. What are the new features in Oracle 11g R2?
13. What are the new features in Oracle 12c?
14. What process will get data from datafiles to DB cache?
Server process
15. What background process will writes data to datafiles?
DBWR
16. What background process will write undo data?
DBWR
17. What are physical components of Oracle database?
An Oracle database is comprised of three types of files: one or more datafiles, two or more redo log files, and one or more control files. The password file and parameter file also come under physical components.
18. What are logical components of Oracle database?
Blocks, Extents, Segments, Tablespaces.
19. What is segment space management?
Automatic (ASSM) and Manual.
20. What is extent management?
Local (LMT) and Dictionary (DMT).
21. What are the differences between LMTS and DMTS?
Tablespaces that record extent allocation in the dictionary are called dictionary managed tablespaces, and tablespaces that record extent allocation in the tablespace header are called locally managed tablespaces.
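A small illustrative DDL contrasting the settings (the datafile path and names are made up; locally managed with ASSM is the usual choice):

SQL> create tablespace users_lmt
     datafile '/u01/app/oracle/oradata/orcl/users_lmt01.dbf' size 100m
     extent management local autoallocate
     segment space management auto;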
October 22, 2012
Oracle DBA Interview Questions/FAQs Part2
21. What is a datafile?
Every Oracle database has one or more physical datafiles. Datafiles contain all the database data. The data of logical database structures such as tables and indexes is physically stored in the datafiles allocated for a database.
22. What are the contents of control file?
Database name, SCN, LSN, datafile locations, redolog locations, archive mode, DB Creation Time, RMAN Backup & Recovery Details, Flashback mode.
23. What is the use of redo log files?

24. What are the uses of undo tablespace or redo segments?

25. How undo tablespace can guarantee retain of required undo data?
Alter tablespace undo_ts retention guarantee;
26. What is ORA-01555 – snapshot too old error and how do you avoid it?

27. What is the use/size of temporary tablespace?

28. What is the use of password file?

29. How to create password file?
$ orapwd file=orapwSID password=sys_password force=y nosysdba=y
30. How many types of indexes are there?
Clustered and Non-Clustered
1.B-Tree index
2.Bitmap index
3.Unique index
4.Function based index
Implicit index and explicit index.
Explicit indexes are again of many types like simple index, unique index, Bitmap index, Functional index, Organisational index, cluster index.
31. What is bitmap index & when it’ll be used?
Bitmap indexes are preferred in Data warehousing environment.
Preferred when cardinality is low.
32. What is B-tree index & when it’ll be used?
B-tree indexes are preferred in OLTP environment.
Preferred when cardinality is high.
33. How you will find out fragmentation of index?
AUTO_SPACE_ADVISOR_JOB will run in daily maintenance window and report fragmented indexes/Tables.
analyze index validate structure;
This populates the table ‘index_stats’. It should be noted that this table contains only one row and therefore only one index can be analysed at a time.
An index should be considered for rebuilding under any of the following conditions:
* The percentage of deleted rows exceeds 30% of the total, i.e. if del_lf_rows / lf_rows > 0.3.
* If the ‘HEIGHT’ is greater than 4.
* If the number of rows in the index (‘LF_ROWS’) is significantly smaller than ‘LF_BLKS’ this can indicate a large number of deletes, indicating that the index should be rebuilt.
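For example (the index name is illustrative; remember INDEX_STATS holds only the most recently analyzed index, in the same session):

SQL> analyze index scott.emp_name_idx validate structure;
SQL> select name, height, lf_rows, del_lf_rows,
            round(del_lf_rows/decode(lf_rows,0,1,lf_rows)*100,2) "PCT_DELETED"
     from index_stats;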
34. What is the difference between delete and truncate?
Truncate will release the space. Delete won’t.
Delete can be used to delete some records. Truncate can’t.
Delete can be rolled back.
Delete will generate undo (the Delete command will log the data changes in the log file, whereas truncate will simply remove the data without it; hence data removed by the Delete command can be rolled back but not the data removed by TRUNCATE).
Truncate is a DDL statement whereas DELETE is a DML statement.
Truncate is faster than delete.
35. What’s the difference between a primary key and a unique key?
Both primary key and unique enforce uniqueness of the column on which they are defined. But by default primary key creates a clustered index on the column, where unique key creates a nonclustered index by default. Primary key doesn’t allow NULLs, but unique key allows one NULL only.
36. What is the difference between schema and user?
Schema is collection of user’s objects.
37. What is the difference between SYSDBA, SYSOPER and SYSASM?
SYSOPER can’t create and drop database.
SYSOPER can’t do incomplete recovery.
SYSOPER can’t change character set.
SYSOPER can’t CREATE DISKGROUP, ADD/DROP/RESIZE DISK
SYSASM can do anything SYSDBA can do.
38. What is the difference between SYS and SYSTEM?
SYSTEM can’t shutdown the database.
SYSTEM can’t create another SYSTEM, but SYS can create another SYS or SYSTEM.
39. How to improve sqlldr (SQL*Loader) performance?

40. What is the difference between view and materialized view?
View is logical, will store only the query, and will always gets latest data.
Mview is physical, will store the data, and may not get latest data.
41. What are materialized view refresh types and which is default?
Complete, fast, force(default)
42. How fast refresh happens?

43. How to find out when was a materialized view refreshed?
Query dba_mviews or dba_mview_analysis or dba_mview_refresh_times
SQL> select MVIEW_NAME, to_char(LAST_REFRESH_DATE,’YYYY-MM-DD HH24:MI:SS’) from dba_mviews;
(or)
SQL> select NAME, to_char(LAST_REFRESH,’YYYY-MM-DD HH24:MI:SS’) from dba_mview_refresh_times;
(or)
SQL> select MVIEW_NAME, to_char(LAST_REFRESH_DATE,’YYYY-MM-DD HH24:MI:SS’) from dba_mview_analysis;
44. What is materialized view log (type)?

45. What is atomic refresh in mviews?
From Oracle 10g, complete refresh of single materialized view can do delete instead of truncate. To force the refresh to do truncate instead of delete, parameter ATOMIC_REFRESH must be set to false.
ATOMIC_REFRESH = FALSE, mview will be truncated and whole data will be inserted. The refresh will go faster, and no undo will be generated.
ATOMIC_REFRESH = TRUE (default), mview will be deleted and whole data will be inserted. Undo will be generated. We will have access at all times even while it is being refreshed.
SQL> EXEC DBMS_MVIEW.REFRESH('mv_emp', 'C', atomic_refresh=>FALSE);
46. How to find out whether database/tablespace/datafile is in backup mode or not?
Query V$BACKUP view.
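For example, to see which datafiles are currently in backup mode (STATUS = ACTIVE):

SQL> select d.name, b.status from v$backup b, v$datafile d where b.file# = d.file#;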
47. What is row chaining?
If a row is too large to fit into a single data block, Oracle stores the data for the row in a chain of one or more data blocks. This can occur when the row is inserted.
48. What is row migration?
An update statement increases the amount of data in a row so that the row no longer fits in its data block. Oracle then tries to find another block with enough free space to hold the entire row; if such a block is available, Oracle moves the entire row to the new block.
49. What are different types of partitions?
With Oracle8, Range partitioning (on single column) was introduced.
With Oracle8i, Hash and Composite(Range-Hash) partitioning was introduced.
With Oracle9i, List partitioning and Composite(Range-List) partitioning was introduced.
With Oracle 11g, Interval partitioning, REFerence partitioning, Virtual column based partitioning, System partitioning and Composite partitioning [Range-Range, List-List, List-Range, List-Hash, Interval-Range, Interval-List, Interval-Interval] was introduced.
50. What is local partitioned index and global partitioned index?
A local index is an index on a partitioned table which is partitioned in the exact same manner as the underlying partitioned table. Each partition of a local index corresponds to one and only one partition of the underlying table.
A global partitioned index is an index on a partitioned or non partitioned tables which are partitioned using a different partitioning key from the table and can have different number of partitions. Global partitioned indexes can only be partitioned using range partitioning.
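A brief illustrative contrast, assuming a hypothetical range-partitioned SALES table:

SQL> create index sales_local_idx on sales(sale_date) local;
SQL> create index sales_global_idx on sales(cust_id)
     global partition by range (cust_id)
     (partition p1 values less than (100000),
      partition pmax values less than (maxvalue));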
51. How you will recover if you lost one/all control file(s)?

52. Why more archivelogs are generated, when database is begin backup mode?
When the database is in begin backup mode, the datafile headers are frozen. Normally only deltas (change vectors) are logged to the redo logs, but in backup mode Oracle writes complete changed blocks to the redo log files, so more redo is generated, causing more log switches and in turn more archive logs.
This is mainly to overcome fractured blocks. In most cases the Oracle block size is equal to or a multiple of the operating system block size.
e.g. Consider an Oracle block size of 2k and an OS block size of 4k, so each OS block is comprised of 2 Oracle blocks. Now you are doing an update while your DB is in backup mode. An Oracle block is being updated and at the same time the backup is copying the OS block which contains this particular DB block. The backup copy will not be consistent, since one part of the block is being updated while it is copied to the backup location. In this case we would have a fractured block, so to avoid this Oracle copies the whole block to the redo logfile, which can be used for recovery. Because of this, redo generation is higher.
53. What UNIX parameters you will set while Oracle installation?
shmmax, shmmni, shmall, sem,
54. What is the use of inittrans and maxtrans in table definition?

55. What are the differences between dbms_job and dbms_scheduler?
Through dbms_scheduler we can schedule OS-level jobs also.
56. What are the differences between dbms_scheduler and cron jobs?
Through dbms_scheduler we can schedule database jobs; through cron we can't.
57. Difference between CPU & PSU patches?
CPU – Critical Patch Update – includes only Security related patches.
PSU – Patch Set Update – includes CPU + other patches deemed important enough to be released prior to a minor (or major) version release.
58. What you will do if (local) inventory corrupted [or] opatch lsinventory is giving error?

59. What are the entries/location of oraInst.loc?
/etc/oraInst.loc is pointer to central/local Oracle Inventory.
60. What is the difference between central/global inventory and local inventory?
October 22, 2012
Oracle DBA Interview Questions/FAQs Part4
61. What is the use of root.sh & oraInstRoot.sh?
Ans:
Changes ownership & permissions of oraInventory
Creating oratab file in the /etc directory
In RAC, starts the clusterware stack
62. What is transportable tablespace (and across platforms)?

63. How can you transport tablespaces across platforms with different endian formats?
Ans:
RMAN
64. What is xtss (cross platform transportable tablespace)?

65. What is the difference between restore point & guaranteed restore point?

66. What is the difference between 10g/11g OEM Grid control and 12c Cloud control?

67. What are the components of Grid control?
Ans:
OMS (Oracle Management Server)
OMR (Oracle Management Repository)
OEM Agent
68. What are the new features of 12c Cloud control?

69. How to find if your Oracle database is 32 bit or 64 bit?
Ans:
Execute the command "file $ORACLE_HOME/bin/oracle". If you see output like
/u01/db/bin/oracle: ELF 64-bit MSB executable SPARCV9 Version 1
it means you are on 64-bit Oracle. If your Oracle is 32-bit, you will see output like
oracle: ELF 32-bit MSB executable SPARC Version 1
70. How to find opatch Version ?
Ans:
opatch is the utility used to apply database patches. To find the opatch version, execute "$ORACLE_HOME/OPatch/opatch version".
Oracle Performance Related Interview Questions/FAQs
1. What will you check whenever a user complains that his session/database is slow?
Check what the session is actually waiting on and which SQL it is running, using Oracle monitoring tools (OEM, AWR/ASH reports) or the V$ dynamic performance views.
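For instance, a first look at what the complaining session is doing and waiting on, using V$SESSION (10g and later; the username is hypothetical):
SELECT sid, serial#, status, sql_id, event, wait_class, seconds_in_wait
FROM v$session
WHERE username = 'APPUSER';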
2. What is the use of statistics?

3. How to generate explain plan?

4. How to check the explain plan of already executed SQLs?

5. How to find out whether the query ran with RBO or CBO?

6. What are top 5 wait events (in AWR report) and how you will resolve them?
db file sequential read  => tune indexing, tune SQL (to do less I/O), tune disks, increase buffer cache. This event is indicative of disk contention on index reads. Make sure all objects are analyzed. Redistribute I/O across disks. This wait comes from the physical side of the database and is related to memory starvation and non-selective index use. A sequential read is an index read followed by a table read, because the index lookup tells exactly which block to go to.
db file scattered read => disk contention on full table scans. Add indexes, tune SQL, tune disks, refresh statistics, and create materialized views. Usually caused by full table scans, possibly due to insufficient indexes or stale statistics.
db file parallel read  => tune SQL, tune indexing, tune disk I/O, increase buffer cache. If you are doing a lot of partition activity then expect to see this wait event. It could be a table or index partition.
db file parallel write  => if you are doing a lot of partition activity then expect to see this wait event. It could be a table or index partition.
db file single write  => if you see this event then probably you have a lot of data files in your database.
control file sequential read
control file parallel write
log file sync    => committing too often, archive log generation is more. Tune applications to commit less, tune disks where redo logs exist, try using nologging/unrecoverable options, log buffer could be too large.
log file switch completion => May need more log files per group.
log file parallel write  => Deals with flushing out the redo log buffer to disk. Disks may be too slow or have an I/O bottleneck. Look for log file contention.
log buffer space   => Increase LOG_BUFFER parameter or move log files to faster disks. Tune application, use NOLOGGING, and look for poor behavior that updates an entire row when only a few columns change.
log file switch (checkpoint incomplete) => May indicate excessive db files or slow IO subsystem.
log file switch (archiving needed)   => Indicates archive files are written too slowly.
redo buffer allocation retries  => shows the number of times a user process waited for space in the redo log buffer.
redo log space wait time  => shows cumulative time (in 10s of milliseconds) waited by all processes waiting for space in the log buffer.
buffer busy waits/ read by other session  => Increase DB_CACHE_SIZE. Tune SQL, tune indexing, we often see this event along with full table scans, if the SQL is inserting data, consider increasing FREELISTS and/or INITRANS, if the waits are on segment header blocks, consider increasing extent sizes.
free buffer waits  => insufficient buffers, a process holding buffers too long, or the I/O subsystem is overloaded. Also check DBWR; the database writes may be getting clogged up.
cache buffers lru chain  => Freelist issues, hot blocks.
no free buffers   => Insufficient buffers, dbwr contention.
latch free
latch: session allocation
latch: in memory undo latch  => If excessive could be bug, check for your version, may have to turn off in memory undo.
latch: cache buffer chains  => check hot objects.
latch: cache buffer handles  => Freelist issues, hot blocks.
direct path write => you won't see these unless you are doing direct-path (append) inserts or data loads.
direct path read => could happen if you are doing a lot of parallel query activity.
direct path read temp or direct path write temp => this wait event shows temp file activity (sorts, hashes, temp tables, bitmaps); check the PGA parameter or the sort area / hash area parameters. You might want to increase them.
library cache load lock
library cache pin => if many sessions are waiting, tune shared pool, if few sessions are waiting, lock is session specific.
library cache lock  => need to find the session holding the lock, look for DML manipulating an object being accessed, if the session is trying to recompile PL/SQL, look for other sessions executing the code.
undo segment extension  => If excessive, tune undo.
wait for an undo record   => Usually only during recovery of large transactions, look at turning off parallel undo recovery.
enqueue wait events   => Look at V$ENQUEUE_STAT
SQL*Net message from client
SQL*Net message from dblink
SQL*Net more data from client
SQL*Net message to client
SQL*Net break/reset to client
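Outside of an AWR report, a quick way to list the top waits accumulated since instance startup is V$SYSTEM_EVENT (a sketch; idle events are not filtered out here):
SELECT *
FROM (SELECT event, total_waits, time_waited
      FROM v$system_event
      ORDER BY time_waited DESC)
WHERE ROWNUM <= 10;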
7. What are the init parameters related to performance/optimizer?
optimizer_mode = choose
optimizer_index_caching = 90
optimizer_index_cost_adj = 25
optimizer_max_permutations = 100
optimizer_use_sql_plan_baselines = true
optimizer_capture_sql_plan_baselines = true
optimizer_use_pending_statistics = true
optimizer_use_invisible_indexes = true
_optimizer_connect_by_cost_based = false
_optimizer_compute_index_stats = true
8. What are the values of optimizer_mode init parameters and their meaning?
Valid values are ALL_ROWS (the default from 10g, best overall throughput), FIRST_ROWS_n with n = 1, 10, 100 or 1000 (best response time for the first n rows) and FIRST_ROWS; CHOOSE and RULE are deprecated.
9. What is the use of AWR, ADDM, ASH?

10. How to generate AWR report and what are the things you will check in the report?

11. How to generate ADDM report and what are the things you will check in the report?

12. How to generate ASH report and what are the things you will check in the report?

13. How to generate STATSPACK report and what are the things you will check in the report?

14. How to generate TKPROF report and what are the things you will check in the report?
The tkprof tool is a tuning tool used to determine cpu and execution times for SQL statements. Use it by first setting timed_statistics to true in the initialization file and then turning on tracing for either the entire database via the sql_trace parameter or for the session using the ALTER SESSION command. Once the trace file is generated you run the tkprof tool against the trace file and then look at the output from the tkprof tool. This can also be used to generate explain plan output.
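A rough end-to-end sketch (the trace and output file names are hypothetical; the trace file is written to the user dump / diag trace directory):
SQL> ALTER SESSION SET timed_statistics = TRUE;
SQL> ALTER SESSION SET sql_trace = TRUE;
-- run the statements to be analyzed
SQL> ALTER SESSION SET sql_trace = FALSE;
$ tkprof orcl_ora_12345.trc tkprof_out.txt sys=no sort=prsela,exeela,fchela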
Oracle GoldenGate Interview Questions/FAQs
1. What is GoldenGate and how to setup GoldenGate?

2. What are processes/components in GoldenGate?
Manager, Extract, Replicat, Data Pump
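Note that Data Pump here means the GoldenGate data pump (a secondary Extract that reads the local trail and sends it to the target), not Oracle Data Pump export/import. On a running installation you can see these processes from ggsci (the GoldenGate home path is hypothetical):
$ cd /u01/app/ogg
$ ./ggsci
GGSCI> INFO ALL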
3. What is Data Pump process in GoldenGate?

4. What is the command line utility in GoldenGate (or) what is ggsci?

5. What is the default port for GoldenGate Manager process?
7809
6. What are the important files in GoldenGate?
GLOBALS, ggserr.log, dirprm, etc …
7. What is checkpoint table?

8. How can you see GoldenGate errors?
ggsci> VIEW GGSEVT
ggserr.log file
UNIX Interview Questions/FAQs for Oracle DBAs
1. What’s the difference between soft link and hard link?
Ans:
A symbolic (soft) linked file and the targeted file can be located on the same or different file systems, while for a hard link they must be located on the same file system: hard links share the same inode number, and an inode table is unique to a file system, so both must be on the same file system.
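A quick illustration (file names are hypothetical):
$ echo test > original.txt
$ ln original.txt hardlink.txt        # hard link: shares the inode of original.txt
$ ln -s original.txt softlink.txt     # soft link: a separate inode that points to the path
$ ls -li original.txt hardlink.txt softlink.txt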
2. How will you read a file from a shell script?
Ans:
while read -r line
do
echo "$line"
done < file_name
3. What’s the use of umask?
It decides the default permissions for newly created files and directories (the mask is subtracted from 666 for files and 777 for directories).
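For example, with the default umask of 022 new files are created as 644 and new directories as 755:
$ umask 022
$ touch newfile
$ mkdir newdir
$ ls -ld newfile newdir      # newfile: -rw-r--r--   newdir: drwxr-xr-x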
4. What is crontab and what are the arguments?
Ans:
The entries have the following elements:
field             allowed values
-----             --------------
minute            0-59
hour              0-23
day of month      1-31
month             1-12
day of week       0-7 (both 0 and 7 are Sunday)
user              Valid OS user
command           Valid command or script

* * * * * command
| | | | |_________ day of the week (0-7, 0 or 7 = Sunday)
| | | |___________ month (1-12)
| | |_____________ day of the month (1-31)
| |_______________ hour (0-23)
|_________________ minute (0-59)
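For example, the following entry (the script path is hypothetical) runs a backup script at 2:00 AM every Sunday; entries are edited with crontab -e and listed with crontab -l:
0 2 * * 0 /home/oracle/scripts/full_backup.sh > /tmp/full_backup.log 2>&1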
5. How to find operating system (OS) version?
Ans:
uname -a
6. How to find out the run level of the user?
Ans:
who -r (or, on Linux, the runlevel command). Note that uname -r shows the kernel release, not the run level.
7. How to delete 7 days old trace files?
Ans:
find ./trace -name "*.trc" -mtime +7 -exec rm {} \;
8. How to get 10th line of a file (by using grep)?

9. (In Solaris) how to find out whether it’s 32bit or 64bit?

10. What is paging?

11. What is top command?
Ans:
top is an operating system command; it displays the running processes ordered so that those consuming the most CPU and memory appear at the top, and it refreshes continuously.
12. How to find out the status of last command executed?
Ans:
$?
13. How to find out number of arguments passed to a shell script?
Ans:
$#
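A small sketch showing both $# and $? (the script name is hypothetical):
$ cat check_args.sh
#!/bin/bash
echo "Number of arguments passed: $#"
ls /tmp > /dev/null
echo "Exit status of the ls command: $?"    # 0 on success, non-zero on failure
$ ./check_args.sh one two three
Number of arguments passed: 3
Exit status of the ls command: 0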
14. What is the default value of umask?
Ans:
022
15. How to add user in Solaris/Linux?
Ans:
useradd command
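For example (the group, home directory and shell are typical choices, not requirements):
# useradd -g oinstall -G dba -m -d /home/oracle -s /bin/bash oracle
# passwd oracle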
