Clearing the RAM on a server

This post is an extension of my previous post.

Sometimes a situation arises where we try to start our database and get an error like "insufficient memory available", so the database cannot be started. If there are no other databases on the host machine and it is running Linux, one option is to use the command below to clear the RAM cache without restarting the server machine:

sync; echo 1 > /proc/sys/vm/drop_caches

The above command can be used on production servers to clear the RAM cache without restarting the machine.
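Before dropping caches it can help to confirm how much of the RAM is actually reclaimable page cache; a minimal sketch (the drop_caches writes require root, so they are shown as comments):

```shell
# Show current memory usage; the "buff/cache" column is reclaimable cache.
free -m

# The three drop_caches levels (root required):
#   sync; echo 1 > /proc/sys/vm/drop_caches   # free page cache only
#   sync; echo 2 > /proc/sys/vm/drop_caches   # free dentries and inodes
#   sync; echo 3 > /proc/sys/vm/drop_caches   # free both
```

`sync` is run first so that dirty pages are flushed to disk before the cache is dropped.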

For more information please read the below article

How to Flush Memory Cache on Linux Server

(Please be really careful while trying other options.)


Using the sed command to truncate large trace/log files

In a production environment we have to maintain trace files for the past 15/30/45 days, depending on the retention period. So if we see a log/trace file consuming too much space, then instead of deleting that file you might want to truncate it and retain only the latest 'N' lines. One good way to do this is with the 'sed' utility.

for example:

oracle@accord:/n01/oraflash1> ls
oracle@accord:/n01/oraflash1> du -sh *
445M logsweep_ora_201603.log
oracle@accord:/n01/oraflash1> sed -i -e :a -e '$q;N;8000,$D;ba' logsweep_ora_201603.log
oracle@accord:/n01/oraflash1> du -sh logsweep_ora_201603.log
804K logsweep_ora_201603.log
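If the GNU sed one-liner feels cryptic, the same truncation can be sketched with tail (the file name below is invented for the demo):

```shell
# Create a sample 5000-line "log" and keep only its last 1000 lines.
LOG=big_trace.log                     # hypothetical file name
seq 1 5000 > "$LOG"
tail -n 1000 "$LOG" > "$LOG.tmp" && mv "$LOG.tmp" "$LOG"
wc -l < "$LOG"                        # -> 1000
```

Unlike the sed one-liner this needs a temporary file, but the effect on the log is the same.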


How to delete a user account in Red Hat/Ubuntu Linux?

Example1: Delete user account from a machine

userdel username


userdel surendra

The disadvantage of the above command is that it deletes only the user's login details, not his home directory.

Example2: Deleting user account and his home directory from a Linux machine

userdel -r username

-r stands for removing the home directory as well.

The disadvantage of the above two commands is that they will not delete user files scattered across the machine, such as his personal files, his mail spool, etc.

There is an excellent user-deletion command with more options, such as deleting all of a user's files: the deluser command. We will see some examples of this command.

Example3: Delete a user account forcefully, even if the user is logged in.

deluser --force username

Example4: Delete a user account along with his home directory

deluser --remove-home username

Example5: Delete a user account along with his home directory and his personal files, which may be located in different places you are not aware of.

deluser --remove-all-files username

Example6: Delete a user account and take a backup of his files to a directory for future use by the company.

deluser --backup-to DIR username

Example: I want to delete the user account don and take a backup of all his files to /var/backup

deluser --backup-to /var/backup don

Note: When you take a backup, the backup file is created as /var/backup/don.tar.gz.

What is 'export' in Oracle?

What does 'export ORACLE_SID' do?

The export, set, and setenv commands are used in UNIX to set the value of a variable or an environment variable. To understand the difference between set, setenv, and export, the user should know the difference between a normal variable and an environment variable.

Let us consider an example. In the Korn shell or Bourne shell, a variable is defined as shown below:

# FILE="output.txt"

This means the variable FILE is assigned the value 'output.txt'. The value can be checked by doing "echo $FILE". This FILE variable is a normal, or local, variable; its scope is only the shell in which it is defined. Any shell or process invoked from the original shell will not have the variable FILE defined, as shown below:

# echo $FILE
output.txt
# sh
# echo $FILE

(no output: FILE is not defined in the child shell)
There are situations where we would like to define a variable that should be accessible in all the shells or processes invoked from the original shell. This is achieved with the export command in ksh/sh, as shown below:

# export FILE="output.txt"
# echo $FILE
output.txt
# sh
# echo $FILE
output.txt

This FILE variable is now an environment variable. An environment variable is one which can be accessed by all the shells or processes initiated from the original shell. So, in ksh/sh, a variable is made an environment variable using the export command.
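The difference can be demonstrated in a single session by running a child shell with sh -c; a minimal sketch:

```shell
FILE="output.txt"                  # local variable: child shells do not see it
child=$(sh -c 'echo "$FILE"')
echo "before export: [$child]"     # -> before export: []

export FILE                        # promote it to an environment variable
child=$(sh -c 'echo "$FILE"')
echo "after export: [$child]"      # -> after export: [output.txt]
```

This is exactly why ORACLE_SID must be exported: processes such as sqlplus, started from your shell, can only see it if it is in the environment.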

set and setenv are the c-shell/tc-shell alternatives for setting a local variable and an environment variable respectively. The set command is used for setting a local variable; setenv is used for setting an environment variable.

The example below shows the set command usage:

# set FILE="output.txt"
# echo $FILE
output.txt
# csh
# echo $FILE

(no output: FILE is not defined in the child shell)

The example below shows the setenv command usage:

# setenv FILE "output.txt"
# echo $FILE
output.txt
# csh
# echo $FILE
output.txt


Why do we set kernel parameters?

What is the use of kernel parameters?

Kernel parameters are used to configure the operating system. Typically, when configuring for Oracle this involves adjustment for memory on the machine and how it gets allocated.


In order to install Oracle, we change kernel parameters on the OS to tell it to configure or release resources such as shared memory and semaphores. Shared memory and semaphores are discussed in more detail below.


Oracle, just like any other operating system process, requires resources, and it is bound by the system limits imposed by the kernel parameters. Parameters regulating the amount of shared memory and the number of semaphores are important for Oracle at runtime. If they are not properly set, Oracle simply stops working or behaves erratically.

The list of kernel parameters to be set can be found in the installation guide for the specific platform.


OS kernel parameters

Oracle’s OS specific installation instructions provide guidelines for the OS configuration, but the settings for the OS parameters can make an enormous difference in Oracle performance.

Because Oracle runs on over 60 different operating systems from a mainframe to a Macintosh, it is impossible to cover every single platform.  However, the common configuration issues for UNIX and Microsoft Windows platforms will be presented.

Server Settings for Windows Servers

Windows servers for Oracle are relatively simple when compared to UNIX-based servers.  There are only a few major points to cover to ensure that the Windows server is optimized for an Oracle database.  The larger Windows servers (e.g. the UNISYS ES7000 servers) can have up to 32 CPUs and hundreds of gigabytes of RAM.  They can support dozens of Oracle instances, but many third party applications can hog server resources, causing Oracle performance issues.

Kernel settings for UNIX and Linux servers

In UNIX and Linux, there is much more flexibility in configuration, and there are hundreds of kernel settings that can benefit database performance.  Table 14.1 lists some of the most common kernel parameters that influence Oracle:

  • shmmax:  The maximum size, in bytes, of a single shared memory segment. For best performance, it should be large enough to hold the entire SGA. (Default 1048576; set by the DBA: yes)
  • shmmin:  The minimum size, in bytes, of a single shared memory segment. (Default 1; set by the DBA: yes)
  • shmseg:  The maximum number of shared memory segments that can be attached (i.e. used) by a single process. (Default 6; set by the DBA: yes)
  • shmmni:  How many shared memory segments can exist on the system. (Default 100; set by the DBA: yes)
  • shmmns:  The amount of shared memory that can be allocated system-wide. (Default N/A; set by the DBA: no)

                                          Table 14.1: OS parameters

The OS-specific Oracle installation guide should be consulted for details.  One of the most common problems with Oracle server configuration is sub-optimal I/O.  For example, the most important thing on Linux is enabling direct I/O on the underlying file system. Without it, Linux will cache files both in the system buffer cache and in the SGA. That double caching is unnecessary and deprives the server of RAM.  The following section provides a closer look by outlining some of the important OS parameters for performance.

Swap Space

The amount of swap space available to the system helps increase both the number of processes and the amount of memory they can acquire without exhausting the system's RAM.  Remember that although swap is slower than RAM, the system is intelligent enough to move data that is less likely to be needed out to swap, freeing up RAM for data in higher demand.  In this way adding swap, a slower resource, can increase performance.

If you find your current swap allocation is lower than the Oracle recommendation, you need to increase the size of the swap partition or create a new partition on a different disk.


Semaphores

Semaphores act as flags for shared memory. A semaphore is either on or off. When an Oracle process accesses the SGA in shared memory, it checks for a semaphore for that portion of memory. If it finds a semaphore set on for that portion, indicating another process is already using it, the process sleeps and checks again later. If there is no semaphore set on for that portion, it sets one on and proceeds with its operation.  When it is done, it switches that semaphore back to off.

Oracle specifies semaphore values for semmsl, semmns, semopm, and semmni as 250, 32000, 100, and 128, respectively.  These can be found in the output of the sysctl command, in this same order:

# /sbin/sysctl -a | grep sem

kernel.sem = 250 32000     32   128

The values for the semaphores represent the following:

  • semmsl:  The number of semaphores per set
  • semmns:  The total number of semaphores available
  • semopm:  The number of operations which can be made per semaphore call
  • semmni:  The maximum number of semaphore sets available on the system

The Oracle recommended values are a good starting point for these parameters, but when running multiple Oracle databases on a system, semmsl and semmns may need to be increased to accommodate the additional instances.

To change this setting, edit the /etc/sysctl.conf file.  If there is already a line for kernel.sem, edit the values given; otherwise, add a line in the same format as the output above.  The line should look like this:

kernel.sem = 250 32000 100 128

This line can be added anywhere in the file, but it is best to keep all the changes in one place within the file.  Comments can be added by starting a line with a # character.
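The current limits can also be read straight from /proc, which needs no special privileges; applying the edited /etc/sysctl.conf does require root (a sketch, assuming a Linux box):

```shell
# The four fields are semmsl, semmns, semopm, semmni, in that order:
cat /proc/sys/kernel/sem

# After editing /etc/sysctl.conf, reload it without a reboot (root required):
#   sysctl -p
```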

Shared Memory Settings

The parameters shmall, shmmax, and shmmni determine how much shared memory is available for Oracle to use.  Note the units: shmall is set in memory pages, not bytes, so its usable size is the value multiplied by the page size, typically 4096 bytes, while shmmax is set in bytes.  To confirm the page size, use the command getconf PAGE_SIZE.

  • shmall:  The total amount of shared memory (in pages) which can be allocated on the system
  • shmmax:  The maximum size of a single shared memory segment (in bytes)
  • shmmni:  The maximum number of shared memory segments available on the system

Given that the SGA for a database must be kept in shared memory, the value of shmmax needs to be as big as the largest SGA.  The value of shmall needs to be bigger than the sum of the SGAs of all your databases, expressed in pages.  The value of shmmni needs to be at least as high as the number of databases intended for the system, but in practice is generally much higher (Oracle recommends 4096).

The quick install guide includes directions on checking these parameters.  If they need to be modified, they can be set in the /etc/sysctl.conf file with entries like the following:

kernel.shmall = 2097152

kernel.shmmax = 2147483648

kernel.shmmni = 4096

Keep in mind that shmall is set in pages (typically 4 KB each), while shmmax is set in bytes.
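On a running Linux system the effective limits can be sketched out like this (these are standard /proc entries, readable without root):

```shell
PAGE_SIZE=$(getconf PAGE_SIZE)               # usually 4096 bytes
SHMMAX=$(cat /proc/sys/kernel/shmmax)        # bytes: largest single segment
SHMALL=$(cat /proc/sys/kernel/shmall)        # pages: system-wide total
echo "Largest segment allowed : $SHMMAX bytes"
echo "System-wide limit       : $((SHMALL * PAGE_SIZE)) bytes"
```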

Other System Settings

A few additional settings are needed for Oracle.  These are not directly related to the system memory, but are included in this section for completeness.

  • fs.file-max:  Controls the total number of files which can be opened at once
  • ip_local_port_range:  Controls the range of local ports used for outgoing network connections; it is unrelated to the ports on which incoming connections are accepted
  • rmem_default and rmem_max:  Represent the default and maximum size of the network receive buffer
  • wmem_default and wmem_max:  Represent the default and maximum size of the network send buffer

Under most circumstances, the Oracle recommended values should be used for these parameters.  These can be set in the /etc/sysctl.conf file.

fs.file-max = 6815744

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

While many of these settings can be changed dynamically, it is best to set them in the /etc/sysctl.conf file and restart.  After the restart, confirm that the settings took effect.

Btrfs File System

Hi guys,

remember my post? This article is about the technology behind it!

The btrfs file system is designed to meet the expanding scalability requirements of large storage subsystems. As the btrfs file system uses B-trees in its implementation, its name derives from the name of those data structures, although it is not a true acronym. A B-tree is a tree-like data structure that enables file systems and databases to efficiently access and update large blocks of data no matter how large the tree grows. The btrfs file system provides the following important features:

• Copy-on-write functionality allows you to create both readable and writable snapshots, and to roll back a file system to a previous state, even after you have converted it from an ext3 or ext4 file system.

• Checksum functionality ensures data integrity.

• Transparent compression saves disk space.

• Transparent defragmentation improves performance.

• Integrated logical volume management allows you to implement RAID 0, RAID 1, or RAID 10 configurations, and to dynamically add and remove storage capacity.

You can find more information here :-

Below is a real-world requirement from which we can understand the need for such technology. According to one account:


I recently had a customer who wanted to reduce storage costs.  They had a 13TB PROD database, and needed 40 copies of the database to support DEV and TEST activities, totalling over 500TB.

We did some diagnostics on the databases and worked out that only about 10% of data blocks in a clone were ever updated, so deduplicating at the filesystem or storage block level could result in a 90% saving.  The client was a large, conservative bank, so the preference was to use existing products from large vendors.  We evaluated products from Delphix, NetApp SnapManager for Oracle, and the Oracle ZFS appliance, and ended up choosing the NetApp product due to a combination of functionality, cost, and in-house skills.
Since then I have been wondering if we could have done it cheaper, and have done a quick evaluation of the BTRFS filesystem which does allow deduplication at filesystem level.
BTRFS is a "copy on write" filesystem and is listed as EXPERIMENTAL, so I don't think it is ready for a PROD environment, but it is worth testing to support DEV and TEST environments, depending on the cost of downtime.

Some Linux commands

It would be helpful to go through the links below to understand Linux commands (written by Arup Nanda).

File Types:

When you see a file, how do you know what type of file it is? The command file tells you that. For instance:

# file alert_DBA102.log
alert_DBA102.log: ASCII text

The file alert_DBA102.log is an ASCII text file. Let’s see some more examples:

# file initTESTAUX.ora.Z
initTESTAUX.ora.Z: compress'd data 16 bits

This tells you that the file is a compressed file, but how do you know the type of the file that was compressed? One option is to uncompress it and run file against the result, but that defeats the purpose of checking first. A cleaner option is to use the -z parameter:

# file -z initTESTAUX.ora.Z
initTESTAUX.ora.Z: ASCII text (compress'd data 16 bits)

Another quirk is the presence of symbolic links:

# file spfile+ASM.ora.ORIGINAL   
spfile+ASM.ora.ORIGINAL: symbolic link to 

This is useful, but what type of file is being pointed to? Instead of running file against the target, you can use the -L option:

# file -L spfile+ASM.ora.ORIGINAL
spfile+ASM.ora.ORIGINAL: data

This clearly shows that the file is a data file. Note that the spfile is binary, as opposed to init.ora, so file reports it as data.

Tip for Oracle Users

Suppose you are looking for a trace file in the user dump destination directory but are unsure whether the file is located in another directory and merely exists here as a symbolic link, or whether someone has compressed (or even renamed) it. There is one thing you know: it's definitely an ASCII file. Here is what you can do:

file -Lz * | grep ASCII | cut -d":" -f1 | xargs ls -ltr

This command checks for ASCII files, even if they are compressed, and lists them in chronological order.
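The tip can be tried out on throwaway files (the names below are invented for the demo):

```shell
mkdir -p /tmp/filedemo && cd /tmp/filedemo
echo "trace data" > a.trc                 # plain ASCII file
echo "trace data" > b.trc && gzip b.trc   # compressed ASCII file
ln -s a.trc c.trc                         # symbolic link to an ASCII file
# All three survive the filter despite the compression and the symlink:
file -Lz * | grep ASCII | cut -d":" -f1 | xargs ls -ltr
```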

Comparing Files

How do you find out if two files—file1 and file2—are identical? There are several ways and each approach has its own appeal.

diff. The simplest command is diff, which shows the difference between two files. Here are the contents of two files:

# cat file1
In file1 only
In file1 and file2
# cat file2
In file1 and file2
In file2 only

If you use the diff command, you will be able to see the difference between the files as shown below:

# diff file1 file2
1d0
< In file1 only
2a2
> In file2 only

In the output, a "<" in the first column indicates that the line exists in the file mentioned first, that is, file1. A ">" in that place indicates that the line exists in the second file (file2). The characters 1d0 in the first line of the output show what must be done in sed to operate on file1 to make it the same as file2.

Another option, -y, shows the same output, but side by side:

# diff -y -W 120 file1 file2
In file1 only                             <
In file1 and file2                           In file1 and file2
                                          >  In file2 only

The -W option is optional; it merely instructs the command to use a 120-character wide screen, useful for files with long lines.

If you just want to know whether the files differ, not necessarily how, you can use the -q option.

# diff -q file3 file4
# diff -q file3 file2
Files file3 and file2 differ

Files file3 and file4 are the same so there is no output; in the other case, the fact that the files differ is reported.

If you are writing a shell script, it might be useful to produce the output in such a manner that it can be parsed. The -u option does that:

# diff -u file1 file2        
--- file1       2006-08-04 08:29:37.000000000 -0400
+++ file2       2006-08-04 08:29:42.000000000 -0400
@@ -1,2 +1,2 @@
-In file1 only
 In file1 and file2
+In file2 only

The output shows the contents of both files, suppressing duplicates; the + and - signs in the first column indicate which file each line belongs to. No character in the first column indicates presence in both files.

The command takes whitespace into consideration. If you want to ignore whitespace, use the -b option. Use the -B option to ignore blank lines. Finally, use -i to ignore case.

The diff command can also be applied to directories. The command

diff dir1 dir2

shows the files present in either directory, whether a file is present in one of the directories or in both. If it finds a subdirectory of the same name in both, it does not go down into it to see whether any individual files differ. Here is an example:

# diff DBA102 PROPRD     
Common subdirectories: DBA102/adump and PROPRD/adump
Only in DBA102: afiedt.buf
Only in PROPRD: archive
Common subdirectories: DBA102/bdump and PROPRD/bdump
Common subdirectories: DBA102/cdump and PROPRD/cdump
Only in PROPRD: CreateDBCatalog.log
Only in PROPRD: CreateDBCatalog.sql
Only in PROPRD: CreateDBFiles.log
Only in PROPRD: CreateDBFiles.sql
Only in PROPRD: CreateDB.log
Only in PROPRD: CreateDB.sql
Only in DBA102: dpdump
Only in PROPRD: emRepository.sql
Only in PROPRD: init.ora
Only in PROPRD: JServer.sql
Only in PROPRD: log
Only in DBA102: oradata
Only in DBA102: pfile
Only in PROPRD: postDBCreation.sql
Only in PROPRD:
Common subdirectories: DBA102/scripts and PROPRD/scripts
Only in PROPRD: sqlPlusHelp.log
Common subdirectories: DBA102/udump and PROPRD/udump

Note that the common subdirectories are simply reported as such but no comparison is made. If you want to drill down even further and compare files under those subdirectories, you should use the following command:

diff -r dir1 dir2

This command recursively goes into each subdirectory to compare the files and reports the difference between the files of the same names.

Tip for Oracle Users

One common use of diff is to compare different versions of init.ora files. As a best practice, I always copy the file to a new name, e.g. initDBA102.ora to initDBA102.080306.ora (to indicate August 3, 2006), before making a change. A simple diff between all versions of the file quickly tells what changed and when.

This is a pretty powerful command for managing your Oracle Home. As a best practice, I never update an Oracle Home in place when applying patches. For instance, suppose the current ORACLE_HOME is /u01/app/oracle/product/10.2/db1. When the time comes to apply a patch set, I don't patch this Oracle Home. Instead, I start a fresh installation in /u01/app/oracle/product/10.2/db2 and then patch that home. Once it's ready, I use the following:

# sqlplus / as sysdba
SQL> shutdown immediate
SQL> exit
# export ORACLE_HOME=/u01/app/oracle/product/10.2/db2
# export PATH=$ORACLE_HOME/bin:$PATH
# sqlplus / as sysdba
SQL> @$ORACLE_HOME/rdbms/admin/catalog

and so on.

The purpose of this approach is that the original Oracle Home is not disturbed and I can easily fall back in case of problems. It also means the database is down and back up again almost immediately. If I installed the patch directly into the Oracle Home, I would have had to shut down the database for a long time, for the entire duration of the patch application. In addition, if the patch application failed for any reason, I would not have a clean Oracle Home.

Now that I have several Oracle Homes, how can I see what changed? It’s really simple; I can use:

diff -r /u01/app/oracle/product/10.2/db1 /u01/app/oracle/product/10.2/db2 | 
grep -v Common

This tells me the differences between the two Oracle Homes and the differences between the files of the same name. Some important files like tnsnames.ora, listener.ora, and sqlnet.ora should not show wide differences, but if they do, then I need to understand why.

cmp. The command cmp is similar to diff:

# cmp file1 file2   
file1 file2 differ: byte 10, line 1

The output stops at the first sign of difference. You can use this to identify where the files might differ. Like diff, cmp has a lot of options, the most important being the -s option, which merely returns a code:

  • 0, if the files are identical
  • 1, if they differ
  • Some other non-zero number, if the comparison couldn’t be made

Here is an example:

# cmp -s file3 file4
# echo $?
0

The special variable $? indicates the return code from the last executed command. In this case it's 0, meaning the files file3 and file4 are identical.

# cmp -s file1 file2
# echo $?
1

The return code 1 means file1 and file2 are not the same.

This property of cmp can prove very useful in shell scripting where you merely want to check if two files differ in any way, but not necessarily check what the difference is. Another important use of this command is to compare binary files, where diff may not be reliable.
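The scripting use mentioned above can be sketched like this:

```shell
# Act only when two files differ; cmp -s prints nothing and
# communicates purely through its exit code.
printf 'a\n' > f1
printf 'b\n' > f2
if cmp -s f1 f2; then
    echo "identical"
else
    echo "files differ"      # this branch runs for f1/f2 above
fi
```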

Tip for Oracle Users

Recall from a previous tip that when you relink Oracle executables, the older version is kept prior to being overwritten. So, when you relink, the executable sqlplus is renamed to “sqlplusO” and the newly compiled sqlplus is placed in the $ORACLE_HOME/bin. So how do you ensure that the sqlplus that was just created is any different? Just use:

# cmp sqlplus sqlplusO
sqlplus sqlplusO differ: byte 657, line 7

If you check the size:

# ls -l sqlplus*
-rwxr-x--x    1 oracle   dba          8851 Aug  4 05:15 sqlplus
-rwxr-x--x    1 oracle   dba          8851 Nov  2  2005 sqlplusO

Even though the size is the same in both cases, cmp proved that the two programs differ.

comm. The command comm is similar to the others but the output comes in three columns, separated by tabs. Here is an example:

# comm file1 file2
        In file1 and file2
In file1 only
In file1 and file2
        In file2 only

This command is useful when you want to see the lines of one file that are not in the other, not just the difference; a sort of MINUS utility from SQL. The option -1 suppresses the lines found only in the first file:

# comm -1 file1 file2
In file1 and file2
In file2 only

Summary of Commands in This Installment

Command   Use
chmod     To change permissions of a file, using the --reference parameter
chown     To change the owner of a file, using the --reference parameter
chgrp     To change the group of a file, using the --reference parameter
stat      To find out the extended attributes of a file, such as the date last accessed
file      To find out the type of a file, such as ASCII, data, and so on
diff      To see the difference between two files
cmp       To compare two files
comm      To see what's common between two files, with the output in three columns
md5sum    To calculate the MD5 hash of files, used to determine whether a file has changed

How can we delete files which are a few days (N days) old?
Ans: To save disk space, you might delete old files or backups which are 1 week (or 2 weeks) old, depending on your disk space and other requirements. As DBAs we should automate such tasks. We can do this as follows:

For Unix environment:

Eg: If I want to delete files from a path which are 7 days old,
write one shell script as given below:
#Removing 7 days old dump files
find /u03/DB_BACKUP_TESTDB/expdp_fulldb_backup -mtime +6 -exec rm {} \;

Where: find => finds the files; /u03/DB_BACKUP_TESTDB/expdp_fulldb_backup => the path;
-mtime +6 => modified more than 6 full days ago, i.e. 7 days or older; -exec rm => removes each matching file. Now, as per your convenience, schedule a cron job for this task; for example, every Sunday at 9 pm:

00 21 * * 0 /u03/DB_BACKUP_TESTDB/expdp_fulldb_backup/ >/u05/DB_BACKUP_TESTDB/logs/CRONJOBS_LOGS/TESTDB_BACK_cron.log 2>&1
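A slightly safer variant (a sketch; the directory, the *.dmp pattern, and GNU find's -print -delete are assumptions) restricts the match by name and prints what it removes:

```shell
mkdir -p /tmp/dumpdemo && cd /tmp/dumpdemo
touch -d '10 days ago' old.dmp     # simulate an old dump file (GNU touch)
touch new.dmp                      # a fresh file that must survive
# Delete only *.dmp files older than 6 full days, printing each one removed:
find . -name '*.dmp' -mtime +6 -print -delete
ls
```

Restricting by name avoids wiping out unrelated files that happen to live under the backup path.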

Which command can be used to view a file in read-only mode?

Ans: The 'view' command can be used to view a file in read-only mode. It is a very good option for looking at the cron file in particular, because that file should never get modified by mistake, as all your daily jobs are scheduled using cron.
Eg: view

Which command is useful for line-by-line display in a Unix environment?
Ans: The 'less' command is used for line-by-line display ('more' gives page-by-page display). I find less more useful, especially for viewing log files and finding the errors or warnings in them.

Which command is used to copy or synchronize two directories in a secure way in any Unix environment?
Ans: The 'rsync' command can be used to copy or synchronize two directories. It is a very important command and a handy tool for a DBA for copying files fast. Of course, 'scp' can also be used, but I really like using 'rsync'. Below is an example.

Copy file from a local computer to a remote server:

Copy file from /www/backup.tar.gz to a remote server called
$ rsync -v -e ssh /www/backup.tar.gz

Advantages of rsync:

  • Speed: First time, rsync replicates the whole content between the source and destination directories. Next time, rsync transfers only the changed blocks or bytes to the destination location, which makes the transfer really fast.
  • Security: rsync allows encryption of data using ssh protocol during transfer.
  • Less Bandwidth: rsync uses compression and decompression of data block by block at the sending and receiving end respectively. So the bandwidth used by rsync will be always less compared to other file transfer protocols.
  • Privileges: No special privileges are required to install and execute rsync

Q. Which command is used to keep a job running in the background without user intervention?

Ans: Most of the time you log in to a remote server via ssh. If you start a shell script or command and then exit (abort the remote connection), the process/command gets killed. Sometimes a job or command takes a long time. If you are not sure when the job will finish, it is better to leave it running in the background. But if you log out of the system, the job will be stopped and terminated by your shell. So what do you do to keep the job running when the process gets SIGHUP?

Say hello to nohup command

The answer is simple: use the nohup command-line utility, which allows a command, process, or shell script to continue running in the background after you log out of the shell:

nohup command syntax:
The syntax is as follows
nohup command-name &

nohup /path/to/command-name arg1 arg2 &


command-name: the name of the shell script or command. You can pass arguments to the command or shell script.
&: nohup does not automatically put the command it runs in the background; you must do that explicitly, by ending the command line with an & symbol.
Use jobs -l command to list all jobs:

jobs -l

nohup command examples

First, login to remote server using ssh command:
$ ssh

$ ssh

I am going to execute a shell script called

nohup &

Type exit or press CTRL + D to exit from the remote server:


In this example, I am going to find all programs and scripts with the setuid bit set; enter:

nohup find / -xdev -type f -perm +u=s -print > out.txt &

Type exit or press CTRL + D to exit from the remote server.
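A minimal, self-contained sketch of the pattern (the command and log path are invented for the demo):

```shell
# Start a job immune to hangups, redirecting its output explicitly:
nohup sh -c 'sleep 1; echo "job done"' > /tmp/nohup_demo.log 2>&1 &
jobs -l            # the background job is listed with its PID
wait               # in real use you would log out instead of waiting
cat /tmp/nohup_demo.log
```

Without the explicit redirection, nohup appends the command's output to a file named nohup.out in the current directory.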