Wednesday, March 20, 2019

Automated Disk Replacement Request

This SR creation method uses automated diagnostics to expedite the replacement of failed drives.
Diagnostics will attempt to identify a failed disk using the controller output file attached to the SR.
Any update to the SR interrupts the automated process and may delay its completion.

* Note: this is an automated request.
If you have a query that requires an engineer response, add a note to the SR AFTER it has been updated with a proposed solution (do not use this template).
Automation will recognize your update and route the SR to an engineer.

Please run the appropriate command for your controller type and paste the output below, or upload it using the Attach button on the More Details page.

For 12Gbps SAS3 Internal RAID HBA use command: storcli64 -PDList -aALL
Note: Earlier versions of the utility may use the MegaCli64 command: MegaCli64 -PDList -aALL

For SGX-SAS6-INT-Z use command: sas2ircu 0 display

For SGX-SAS6-R-INT-Z use command: MegaCli64 -PDList -aALL
Note: Some versions of the utility may use the MegaCli command: MegaCli -PDList -aALL

For RAID 5 Expansion Module (X4620A) use command: arcconf getconfig 1

For more details on internal disk controllers, please see: What Internal Hardware RAID controllers do Sun/Oracle X86 Systems Use?
http://support.oracle.com/rs?type=doc&id=1564893.1
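The controller-to-command table above can be wrapped in a small helper for convenience. This is an illustrative sketch only: the function name and the short model labels are assumptions, not part of the official process, and the actual utilities must already be installed from the download links below.

```shell
# Hypothetical helper: map a controller model to the disk-listing command
# from the table above. The labels used as case keys are shorthand
# assumptions, not official part numbers.
disk_report_cmd() {
  case "$1" in
    SAS3-12G)          echo "storcli64 -PDList -aALL" ;;   # 12Gbps SAS3 internal RAID HBA
    SGX-SAS6-INT-Z)    echo "sas2ircu 0 display" ;;
    SGX-SAS6-R-INT-Z)  echo "MegaCli64 -PDList -aALL" ;;
    X4620A)            echo "arcconf getconfig 1" ;;       # RAID 5 Expansion Module
    *)                 echo "unknown controller: $1" >&2; return 1 ;;
  esac
}

# Example: print the command to run for a given controller, e.g. to
# capture its output into a file for attachment to the SR:
#   disk_report_cmd SGX-SAS6-INT-Z
```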

Downloads:
Download MegaCli from: Oracle Downloads SGX-SAS6-R-INT-Z
https://www.broadcom.com/support/oem/oracle/6gb/sg_x_sas6-r-int-z

Download sas2ircu from: Oracle Downloads SGX-SAS6-INT-Z
https://www.broadcom.com/support/oem/oracle/6gb/sg_x_sas6-int-z

Download storcli from: Oracle Storage 12 Gb/s SAS PCIe RAID HBA, Internal
https://www.broadcom.com/support/oem/oracle/12gb/sas-12gbs-pcie-raid-hba-internal

Download arcconf from: Downloads for Sun StorageTek* SAS RAID HBA, Internal
https://downloadcenter.intel.com/product/50583

Glovia Server YUM Installed Packages

  748  yum install eclipse
  755  yum install gtop
  756  yum install stui
  757  yum list installed | grep nodejs
  758  yum list installed | grep epel-release
  759  yum install nodejs
  775  yum install gtop
  777  yum install npm
  779  cd yum.repos.d
  784  yum list installed | grep epel-release
  785  yum list installed | grep modejs
  786  yum install hodejs
  787  yum install nodejs
  788  yum list installed | grep nodejs
  790  yum install gtop
  793  yum list installed | grep npm
  794  yum install npm
  809  yum list installed | grep ntfs
  810  yum install ntfs-3g
  813  cd yum.repos.d
  815  diff public-yum-ol6.repo public-yum-ol6.repo.20170830
  816  vi public-yum-ol6.repo.20170830
  818  scp public-yum-ol6.repo bliu@10.21.8.143:/tmp
  820  history | grep yum install
  821  yum list installed | grep ntfs
  822  yum install ntfs-3g
  825  cd yum.repos.d
  827  diff public-yum-ol6.repo public-yum-ol6.repo.20170830
  828  vi public-yum-ol6.repo.20170830
  830  scp public-yum-ol6.repo bliu@10.21.8.143:/tmp
  832  history | grep yum install
  833  history | grep yum
  836  yum list installed | grep ntfs
  844  yum list installed | grep nagios
  845  yum list installed | grep openssl
  847  yum install nrpe nagios-plugins-all
  862  yum install lynx
  876  yum install finger
  883  yum install xclock
  945  yum list installed | grep up2date
  976  yum list installed | grep ksplice

Tuesday, March 19, 2019

MOUNT KEIHIN ZEUS DRIVES FROM ALPHA

# umount /fra/mfgblob_archives
# umount /fra/grid
# umount /fra/mfgblob_exports
# mount -o rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 -t nfs zeus.net.com:/oracle/mfgblob_archives /fra/mfgblob_archives
#
# mount -o rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 -t nfs zeus.net.com:/oracle_backup /fra/grid
#
# mount -o rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 -t nfs zeus.net.com:/oracle/mfgblob_exports /fra/mfgblob_exports
#
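To make these mounts survive a reboot, the same options can be recorded in /etc/fstab instead of being remounted by hand. A sketch using the server and paths above (verify the option list against your environment before committing it):

```
zeus.net.com:/oracle/mfgblob_archives  /fra/mfgblob_archives  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0  0 0
zeus.net.com:/oracle_backup            /fra/grid              nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0  0 0
zeus.net.com:/oracle/mfgblob_exports   /fra/mfgblob_exports   nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0  0 0
```

With these entries in place, `mount -a` (or a reboot) brings up all three mounts.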

Friday, March 15, 2019

CentOS / RHEL : How to delete LVM volume

If a non-root LVM logical volume, its volume group, and the physical volumes backing it are no longer required, they can be removed with the following steps. If the logical volume contains any required data, take a backup before proceeding.
In this example, we will delete the logical volume “testlv” from the volume group “datavg”. The LV is mounted at /data01.

# df -hP | grep -i data01
/dev/mapper/datavg-testlv  976M  2.6M  907M   1% /data01

# lvs
  LV     VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root   centos -wi-ao---- 17.47g
  swap   centos -wi-ao----  2.00g
  testlv datavg -wi-ao----  1.00g

1. Delete the entry of the mount point from the /etc/fstab :
# cat /etc/fstab
...
/dev/mapper/datavg-testlv            /data01              ext4    defaults        0 0
...

2. Unmount the mount point :
# umount /data01

3. Disable lvm :
# lvchange -an /dev/datavg/testlv

4. Delete lvm volume :
# lvremove /dev/datavg/testlv

5. Disable volume group :
# vgchange -an datavg

6. Delete volume group :
# vgremove datavg

7. Delete the physical volumes used by the volume group “datavg” :
# pvremove /dev/sdb  /dev/sdc
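Steps 2 through 7 above can be collected into a small "plan printer" that emits the exact commands for review before you run them by hand. This is a sketch, not part of the original procedure: the function name is an assumption, and step 1 (removing the /etc/fstab entry) must still be done manually in an editor.

```shell
# Hypothetical helper: print the LVM teardown commands (steps 2-7 above)
# for a given VG, LV, mount point, and list of PVs. It only prints the
# plan; nothing is executed, so you can inspect it before running it.
# Step 1 (deleting the /etc/fstab entry) is intentionally left manual.
lvm_teardown_plan() {
  local vg="$1" lv="$2" mnt="$3"
  shift 3
  echo "umount $mnt"                 # step 2: unmount the mount point
  echo "lvchange -an /dev/$vg/$lv"   # step 3: deactivate the LV
  echo "lvremove /dev/$vg/$lv"       # step 4: delete the LV
  echo "vgchange -an $vg"            # step 5: deactivate the VG
  echo "vgremove $vg"                # step 6: delete the VG
  for pv in "$@"; do
    echo "pvremove $pv"              # step 7: wipe each PV
  done
}

# Using the example names from this post:
lvm_teardown_plan datavg testlv /data01 /dev/sdb /dev/sdc
```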
