Tuesday, May 10, 2016

Oracle VM 3.x: Convert Oracle Linux 6.x PVM guest to a HVM guest (Doc ID 1917613.1)


APPLIES TO:

Oracle VM - Version 3.0.1 and later
x86_64

GOAL

This document describes how to convert an installed PVM guest to an HVM guest.

An Oracle Linux 6.x PVM guest has PV drivers loaded:
[root@ol6vm ~]# lsmod | grep xen
xen_netfront           19533  0
xen_blkfront           17065  4

SOLUTION

1. Append the kernel boot argument "xen_emul_unplug=never" to the kernel line(s) in /boot/grub/grub.conf.
Note: This argument is valid under both the UEK and RHCK kernels.
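If you want to confirm which kernel the guest currently boots before editing grub.conf (useful for checking which title line the default entry points at), a quick check from inside the guest is:
[root@ol6vm ~]# uname -r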
Before modification:
[root@ol6vm ~]# grep -v "#" /boot/grub/grub.conf
default=1
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Oracle Linux Server (2.6.32-279.9.1.el6.x86_64)
    root (hd0,0)
    kernel /vmlinuz-2.6.32-279.9.1.el6.x86_64 ro root=LABEL=/ SYSFONT=latarcyrheb-sun16 LANG=en_US.UTF-8 KEYTABLE=us numa=off crashkernel=auto
    initrd /initramfs-2.6.32-279.9.1.el6.x86_64.img
title Oracle Linux Server (2.6.39-200.32.1.el6uek.x86_64)
    root (hd0,0)
    kernel /vmlinuz-2.6.39-200.32.1.el6uek.x86_64 ro root=LABEL=/ SYSFONT=latarcyrheb-sun16 LANG=en_US.UTF-8 KEYTABLE=us numa=off
    initrd /initramfs-2.6.39-200.32.1.el6uek.x86_64.img
After modification (note the added "xen_emul_unplug=never" argument and the updated titles):
[root@ol6vm ~]# grep -v "#" /boot/grub/grub.conf
default=1
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Oracle Linux Server (2.6.32-279.9.1.el6.x86_64) HVM
    root (hd0,0)
    kernel /vmlinuz-2.6.32-279.9.1.el6.x86_64 ro root=LABEL=/ SYSFONT=latarcyrheb-sun16 LANG=en_US.UTF-8 KEYTABLE=us numa=off crashkernel=auto xen_emul_unplug=never
    initrd /initramfs-2.6.32-279.9.1.el6.x86_64.img
title Oracle Linux Server (2.6.39-200.32.1.el6uek.x86_64) HVM
    root (hd0,0)
    kernel /vmlinuz-2.6.39-200.32.1.el6uek.x86_64 ro root=LABEL=/ SYSFONT=latarcyrheb-sun16 LANG=en_US.UTF-8 KEYTABLE=us numa=off xen_emul_unplug=never
    initrd /initramfs-2.6.39-200.32.1.el6uek.x86_64.img
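Instead of editing grub.conf by hand, the argument can also be appended to every kernel line with a one-liner such as the one below. This is only an illustration and not part of the original note; back up the file first:
[root@ol6vm ~]# cp /boot/grub/grub.conf /boot/grub/grub.conf.bak
[root@ol6vm ~]# sed -i '/^[[:space:]]*kernel /s/$/ xen_emul_unplug=never/' /boot/grub/grub.conf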

2. Shut down the VM guest, then update the guest properties via the Oracle VM Manager BUI: change the Domain Type from Xen PVM to Xen HVM and save the changes.

3. Start the VM guest and confirm the conversion.

i. Kernel argument "xen_emul_unplug=never" is correctly loaded:
[root@ol6vm ~]# cat /proc/cmdline
ro root=LABEL=/ SYSFONT=latarcyrheb-sun16 LANG=en_US.UTF-8 KEYTABLE=us numa=off xen_emul_unplug=never

ii. No "xen_netfront" or "xen_blkfront" loaded:
[root@ol6vm ~]# lsmod | grep xen
[root@ol6vm ~]# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 01)
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
00:03.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01)
00:04.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 20)

iii. The DMI table decoder output indicates an HVM domU:
[root@ol6vm ~]# dmidecode | grep -A4 "System Information"
System Information
    Manufacturer: Xen
    Product Name: HVM domU
    Version: 4.3.1OVM
    Serial Number: 0004fb00-0006-0000-fb9c-fbc14b9f6b58

REFERENCES

NOTE:1606398.1 - Oracle VM 3.x: Oracle Linux 6 guest VM conversion from HVM to PVM
NOTE:1664681.1 - How to Initiate a PVM Guest OS Installation in Oracle VM Release 3

How To Boot Linux Paravirtualized VM Into Rescue Mode and in Single User Mode (Doc ID 1602157.1)


APPLIES TO:

Oracle VM - Version 3.0.3 to 3.2.6 [Release OVM30 to OVM32]
Information in this document applies to any platform.

GOAL

This article describes how to configure an Oracle VM guest Virtual Machine (VM) to boot into Linux rescue mode or single user mode.

SOLUTION

If, for some reason, an Oracle VM guest fails to start, it may be necessary to boot it into rescue mode or single user mode in order to diagnose and recover the system.

For physical installations, this is simply a matter of inserting the installation media (e.g. DVD/CD-ROM Disk 1), booting the system and typing 'linux rescue' at the installation prompt; for single user mode you just need to interrupt the boot at the GRUB prompt and edit the kernel line for single user mode.

For Virtual Machines, however, the process is somewhat different. In fact, it differs depending on which virtualization type the guest VM is configured to use, i.e. Paravirtualized (PVM) or Hardware (Fully) Virtualized (HVM). Here we will see how to boot PVM guests into rescue mode and into single user mode.
Before looking at the methods below, note the easiest way to repair the vdisks of a PVM guest: shut down the guest in question, detach the vdisk, attach it to another guest with the same OS release that is functioning properly, and repair it using the tools installed in that guest.
Paravirtualized (PVM) Guests

1. Rescue Mode

1.1 Locate the Guest VM Installation Media

Locate the installation media for the guest VM. If the installation media was imported via Oracle VM Manager, the media should reside under the /OVS/Repositories/<uuid>/ISOs directory on the Oracle VM Server(s).

1.2. Loop Mount the Guest VM Installation Media

Loop mount the guest installation media on the Oracle VM Server.
In the following example, the 64-bit EL5 DVD ISO media (Enterprise-R5-U8-Server-x86_64-dvd.iso) is used.
# mkdir /media/OL5u8
# mount -o ro,loop /OVS/Repositories/0004fb00000300003a8564212a807247/ISOs/Enterprise-R5-U8-Server-x86_64-dvd.iso /media/OL5u8
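Before copying anything, you can verify that the Xen boot images are present on the mounted media (same example path as above); you should see vmlinuz and initrd.img:
# ls /media/OL5u8/images/xen/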


1.3. Copy the Initial RAM Disk and Kernel from Guest Installation Media

Copy the initial ram disk (initrd.img) and kernel image (vmlinuz) from the loop-mounted guest installation media to a temporary directory on the Oracle VM Server, for example:
# cp /media/OL5u8/images/xen/initrd.img /media/OL5u8/images/xen/vmlinuz /tmp


1.4. Modify the Guest VM Configuration File

For Oracle VM Manager managed guests, the guest VM configuration file (vm.cfg) typically resides under the /OVS/Repositories/<uuid>/VirtualMachines/<VM UUID>/ directory, for example: /OVS/Repositories/0004fb00000300003a8564212a807247/VirtualMachines/0004fb0000060000837b90fd1f033b81/vm.cfg
Back up the existing guest VM configuration file, then modify it as follows:
Comment out the following line:
bootloader = "/usr/bin/pygrub"

Add the following lines:
kernel  = "/tmp/vmlinuz"
ramdisk = "/tmp/initrd.img"
extra   = "rescue"

Note: The above lines should be added directly below the "bootloader" line that was commented out above.
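For reference, a minimal excerpt of how the modified vm.cfg might look after these changes (all other entries in the file stay untouched):
#bootloader = "/usr/bin/pygrub"
kernel  = "/tmp/vmlinuz"
ramdisk = "/tmp/initrd.img"
extra   = "rescue"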

1.5. Start the Guest VM

Using the 'xm' command, start the guest VM, for example:
# xm create /OVS/Repositories/0004fb00000300003a8564212a807247/VirtualMachines/0004fb0000060000837b90fd1f033b81/vm.cfg
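Optionally, confirm that the domain has started before attaching to its console; the guest should show up in state 'r' (running) or 'b' (blocked):
# xm list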


1.6. Launch the VNC console for the guest VM

Now launch the VNC console using the OVM manager for the required guest VM.

2. Alternate Ways:

1. Since OVM 3.x can specify a network location (such as an NFS export) as the boot device for a VM, there is no need to copy initrd.img and vmlinuz from the ISO media to launch rescue mode; the network location can be used directly.

a. Configure the VM to boot from the network via OVM Manager.
b. Modify vm.cfg to add rescue as the extra argument, i.e. extra='rescue'.
c. Power on the VM via OVM Manager.
Alternatively, the simplest approach is often the following:
2. In the OVM Manager GUI, stop the guest VM that you want to boot into rescue mode and perform the steps below:
a. Stop the VM.
b. Edit the VM, go to the Disks section, and take a screenshot of all the attached disks if the VM has more than four virtual or physical disks attached.
c. Go to the Configuration tab and set the Domain Type to XEN_HVM.
d. Go to the Disks section again, remove the additional disks if there are more than three, and select the CD/DVD option as the fourth disk. Then select the required ISO, go to the Boot Order tab, choose CDROM as the first boot option, and click OK.
e. Start the VM; this provides the option to boot into rescue mode just as on a physical server.
f. Once the rescue work is done, stop the VM, edit it back to the XEN_PVM domain type, add back the removed disks, change the Boot Order if necessary, click OK, and boot the VM back up in PVM mode.

3. Single User Mode

For single user mode, simply edit the vm.cfg file and add the line below just after bootloader = "/usr/bin/pygrub":
extra = 'S'
Start the guest VM and open the VNC console. Once the issue is resolved, shut down the VM, edit the vm.cfg file again, remove the extra = 'S' parameter and save the file; after that the VM boots into its predefined runlevel again.
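For reference, the relevant fragment of vm.cfg then looks like this (unlike the rescue mode case, the bootloader line stays active):
bootloader = "/usr/bin/pygrub"
extra = 'S'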

REFERENCES

NOTE:549410.1 - Oracle VM: How to Configure a Guest VM to Boot into Linux Rescue Mode

Wednesday, May 4, 2016

Oracle VM: How to Move "poolfs/ repository" to a New Storage (Doc ID 1683899.1)


GOAL

This document describes the procedure to move the poolfs and repository to new storage and how to decommission the existing pool. This scenario covers both the FC and iSCSI approaches.

SOLUTION

FC Storage

Provision the new FC LUNs in the OVM Manager GUI using the normal procedure.
1. Find the master server in the existing pool in the OVM GUI.
2. Migrate or shut down the guest machines on one of the non-master OVS servers in the pool, so that the existing setup is not disturbed until the pool is deleted. (Choose only a non-master server.)
3. Remove the non-master servers from the pool once their guests have been migrated or shut down.
4. A new VIP is needed for the second pool during the migration. The old VIP is discarded once the original server pool is deleted.
5. Create the new pool as per the standard procedure, choose the LUN for the poolfs from the new FC storage, and add the removed server to this new pool. Adding the server to the pool may fail due to stale cluster entries; refer to document 1635060.1 if you hit the error "Exception: Already in cluster".
6. At this point, all OVS servers except the master should have been removed from the older pool.
7. If any guest machines are running on the master node, shut them down.
8. First, unpresent the servers from the older repository and release ownership of the repository. Refer to document 1551877.1 for the detailed procedure to release the ownership.
9. Present the repository to the new pool, which uses the new storage for the poolfs and repository. Refer to document 1551877.1 for the detailed procedure to present it to the new pool.
10. Verify that the repository information is available once it is presented to the new pool. If so, proceed to delete the older server pool: place the remaining master server in maintenance mode, remove it from the pool, then delete the server and the pool.
11. Discover the server, add it to the new pool, and verify that storage/repository access is presented to all the servers in the new pool.
12. Present the new storage repository to the new pool and migrate the content from the older repository to the new repository. For the procedure, refer to the documentation link in the References section.
13. Start the guest machines.

iSCSI 

The steps for iSCSI remain the same as for FC storage, but care should be taken with the access group of the storage initiators; otherwise the iSCSI disks won't be available for the repositories/poolfs.
Except for generic storage arrays, it is possible to create multiple access groups in order to arrange and restrict physical disk access according to your requirements. Generic iSCSI storage arrays have a single access group available by default, where you can simply add or remove the storage initiators of your Oracle VM Servers.
If you have generic iSCSI storage, make sure the storage initiators of all the servers are in the access group at the time of creating the poolfs/repositories.
For more details, refer here.
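A quick way to collect each Oracle VM Server's initiator IQN before adjusting the access group, and to check the sessions once the LUNs are presented, is with the standard open-iscsi tools (shown here only as a sketch; run on each OVS server):
# cat /etc/iscsi/initiatorname.iscsi
# iscsiadm -m session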

REFERENCES

NOTE:1521931.1 - VMPinfo3 Diagnostic Tool For Oracle VM 3.2 and 3.3 Troubleshooting
http://docs.oracle.com/cd/E35328_01/E35332/html/vmusg-vm-manage.html#vmusg-vm-move-storage 
NOTE:1635060.1 - Oracle VM - Cannot Move Server to Another Server Pool
NOTE:1551877.1 - How to Present a Storage Repository From One Pool to Another at Oracle VM Manager

Tuesday, May 3, 2016

Creating a virtual RAC using oracle VM templates




Our latest toy at portrix systems is an Oracle VM testing environment, consisting of three Sun servers, a 7120 storage appliance and a lot of enthusiasm to explore this exciting technology as deeply as possible.
A few days ago I challenged myself to install a virtual two node cluster into this environment.
Luckily I did not have to perform the installation from scratch, because oracle offers some preconfigured templates to download and simply import into the OVM environment.
I was most interested in “Oracle Real Application Cluster (RAC) 11g R2” which you will find here.
Unfortunately the first steps of the documentation, though very detailed, did not work in every aspect. It seems to refer to an older release of Oracle VM Manager, and some important parts of the description worked a bit differently in 3.0.3.
If somebody else faces the same problem, the following description might help.
These have been the two most time consuming issues for me.
– Creating shared disks for ASM:
This wasn’t quite easy because the menu structure has changed between the documentation and Oracle VM 3.0.3. And since I had no experience at all with OVM, I had to do a bit of digging before I was able to achieve this part (how to add a virtual disk) on my own. But once found, the disk creation itself is very simple.
On the left pane, you have to select “Server Pools“, then go to the “Repositories” tab on the right, select your repository and finally select the “Virtual Disk” tab on the bottom right of the page. When creating a new disk, you have to select the “shareable” option to make it work.

– Importing the template:
In VM Manager 3.0.3, Oracle has changed the method of importing templates significantly. You can’t just copy the template into a local folder any more; you now have to distribute templates using a local webserver, since OVM needs a working HTTP link to import them. Luckily, our 7120 has the option to create shares via HTTP.
But that’s not the only problem. The downloaded template consists of two parts: disk one containing the image for the OS, and disk two containing the image for the clusterware files; they come in two separate zip files.
I had to unzip both files and got two tgz files, as expected per the manual. I extracted those as well and ended up with one directory. This directory then had to be packed again as a whole, leaving me with one big tgz. And this was the template I was finally able to import into OVM.
> tar xzf OVM_OL5U6_X86_64_11202RAC_PVM-1of2.tgz 
> tar xzf OVM_OL5U6_X86_64_11202RAC_PVM-2of2.tgz
> ls -la
drwxr-xr-x  3 markus markus       4096 2012-01-25 12:55 OVM_OL5U6_X86_64_11202RAC_PVM
-rw-r--r--  1 markus markus  782798631 2011-06-24 13:44 OVM_OL5U6_X86_64_11202RAC_PVM-1of2.tgz
-rw-r--r--  1 markus markus 3515262120 2011-06-24 14:12 OVM_OL5U6_X86_64_11202RAC_PVM-2of2.tgz
> tar -czvf OVM_OL5U6_X86_64_11202RAC_PVM.tgz OVM_OL5U6_X86_64_11202RAC_PVM
> ls -la
drwxr-xr-x  3 markus markus       4096 2012-01-25 12:55 OVM_OL5U6_X86_64_11202RAC_PVM
-rw-r--r--  1 markus markus  782798631 2011-06-24 13:44 OVM_OL5U6_X86_64_11202RAC_PVM-1of2.tgz
-rw-r--r--  1 markus markus 3515262120 2011-06-24 14:12 OVM_OL5U6_X86_64_11202RAC_PVM-2of2.tgz
-rw-r--r--  1 markus markus 4329463066 2012-01-25 14:07 OVM_OL5U6_X86_64_11202RAC_PVM.tgz
Afterwards I noticed that you can also upload the tgz files without unpacking and repacking them; you just have to upload them both at the same time. I did not know that the first time.
Once this was done, the rest was fairly straightforward and I just needed to follow the instructions from the documentation. I cloned the template twice, booted both machines, gave them IPs and names and then had a cup of tea for about half an hour.
That’s all.

The result is a working and completely configured two node cluster, perfectly suited for some database and clusterware testing. The only little downside is that this installation does not include a separate grid user, so both the clusterware and the database are owned by the oracle user. But for a testing environment that’s okay.
My next task now would be to find as many ways as possible to destroy a virtual RAC cluster. I think I might somehow be able to do that…

Resize linux root disk in Oracle VM






I just needed to grow my root fs on an Oracle Linux guest VM running in OVM 3.2 and I thought I’d share the process with you. A lot of the steps are taken from a similar blog describing the same thing on VMware.
To start with, I have a Linux VM which started as a clone of one of the Linux VM templates that Oracle provides. It comes with a 20G disk which is partitioned into two pieces: one for /boot, the other an LVM physical volume that holds the rest of the system, including the root partition.
The basic steps are:
– backup the existing system just in case
– resize virtual disk in OVM
– grow the partition to the new size
– grow the pv device
– grow the logical volume for my root / partition
– grow the filesystem on /
Instead of growing the original physical volume and partition I could also simply add a second disk to the lvm pool but I like to keep things simple and use as few devices as possible.
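For orientation, here is a condensed sketch of the commands used in the rest of this post, assuming the same device (/dev/xvda2) and volume names (vg_ovmm32/lv_root) as in my VM; the full sessions with their output follow below:
# fdisk /dev/xvda                                (delete and re-create partition 2 with the same start sector, type 8e)
# pvresize /dev/xvda2
# lvextend -l +100%FREE /dev/vg_ovmm32/lv_root
# resize2fs /dev/mapper/vg_ovmm32-lv_root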
First you want to back up the VM in case something goes wrong later. If your VM is on an OCFS repository, you can simply clone the running VM. This VM was on an NFS repo so I had to turn it off and create a clone from there (actually I simply made a copy of the virtual disk .img file once the VM was shut down).
Edit the VM in OVM Manager, navigate to the virtual disk and enter a new (bigger) size for the virtual disk. Click apply and this part is taken care of. Now the Linux guest needs to see the change, and this can be done either by rebooting the VM or by rescanning the devices. Since this is a paravirtualized VM (PVM), simply running ‘sync’ or rescanning the SCSI bus won’t work because the device (xvda) is not really SCSI. I have yet to find a way to do this online (and if you know one, please leave a comment). Until then: reboot the VM so that fdisk recognizes the new size of the disk. After that, start fdisk, check the new size and re-create the LVM partition at the new maximum size. I am always a bit scared of this step because you are actually deleting the partition. But this simply means removing the entry from the partition table and will leave your data intact, as long as you re-create the partition with the same start sector.
[root@ovmm32 ~]# fdisk /dev/xvda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): p

Disk /dev/xvda: 34.4 GB, 34359738368 bytes
255 heads, 63 sectors/track, 4177 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00082e48

    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/xvda2              64        2611    20458496   8e  Linux LVM

Command (m for help): d
Partition number (1-4): 2

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (64-4177, default 64): 
Using default value 64
Last cylinder, +cylinders or +size{K,M,G} (64-4177, default 4177): 
Using default value 4177

Command (m for help): p

Disk /dev/xvda: 34.4 GB, 34359738368 bytes
255 heads, 63 sectors/track, 4177 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00082e48

    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/xvda2              64        4177    33038728+  83  Linux

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): L

 0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris        
 1  FAT12           39  Plan 9          82  Linux swap / So c1  DRDOS/sec (FAT-
 2  XENIX root      3c  PartitionMagic  83  Linux           c4  DRDOS/sec (FAT-
 3  XENIX usr       40  Venix 80286     84  OS/2 hidden C:  c6  DRDOS/sec (FAT-
 4  FAT16 <32M      41  PPC PReP Boot   85  Linux extended  c7  Syrinx         
 5  Extended        42  SFS             86  NTFS volume set da  Non-FS data    
 6  FAT16           4d  QNX4.x          87  NTFS volume set db  CP/M / CTOS / .
 7  HPFS/NTFS       4e  QNX4.x 2nd part 88  Linux plaintext de  Dell Utility   
 8  AIX             4f  QNX4.x 3rd part 8e  Linux LVM       df  BootIt         
 9  AIX bootable    50  OnTrack DM      93  Amoeba          e1  DOS access     
 a  OS/2 Boot Manag 51  OnTrack DM6 Aux 94  Amoeba BBT      e3  DOS R/O        
 b  W95 FAT32       52  CP/M            9f  BSD/OS          e4  SpeedStor      
 c  W95 FAT32 (LBA) 53  OnTrack DM6 Aux a0  IBM Thinkpad hi eb  BeOS fs        
 e  W95 FAT16 (LBA) 54  OnTrackDM6      a5  FreeBSD         ee  GPT            
 f  W95 Ext'd (LBA) 55  EZ-Drive        a6  OpenBSD         ef  EFI (FAT-12/16/
10  OPUS            56  Golden Bow      a7  NeXTSTEP        f0  Linux/PA-RISC b
11  Hidden FAT12    5c  Priam Edisk     a8  Darwin UFS      f1  SpeedStor      
12  Compaq diagnost 61  SpeedStor       a9  NetBSD          f4  SpeedStor      
14  Hidden FAT16 <3 63  GNU HURD or Sys ab  Darwin boot     f2  DOS secondary  
16  Hidden FAT16    64  Novell Netware  af  HFS / HFS+      fb  VMware VMFS    
17  Hidden HPFS/NTF 65  Novell Netware  b7  BSDI fs         fc  VMware VMKCORE 
18  AST SmartSleep  70  DiskSecure Mult b8  BSDI swap       fd  Linux raid auto
1b  Hidden W95 FAT3 75  PC/IX           bb  Boot Wizard hid fe  LANstep        
1c  Hidden W95 FAT3 80  Old Minix       be  Solaris boot    ff  BBT            
1e  Hidden W95 FAT1
Hex code (type L to list codes): 8e
Changed system type of partition 2 to 8e (Linux LVM)
Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
This calls for another reboot which is unfortunate, but one more cycle won’t hurt us now. After that we can resize the pv:
[root@ovmm32 ~]# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/xvda2
  VG Name               vg_ovmm32
  PV Size               19.51 GiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              4994
  Free PE               0
  Allocated PE          4994
  PV UUID               HPweY3-sK8N-qKRR-57qR-HimH-zPKK-WD4p6e
   
[root@ovmm32 ~]# pvresize /dev/xvda2
  Physical volume "/dev/xvda2" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
[root@ovmm32 ~]# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/xvda2
  VG Name               vg_ovmm32
  PV Size               31.51 GiB / not usable 3.38 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              8065
  Free PE               3071
  Allocated PE          4994
  PV UUID               HPweY3-sK8N-qKRR-57qR-HimH-zPKK-WD4p6e
[root@ovmm32 ~]# vgdisplay 
  --- Volume group ---
  VG Name               vg_ovmm32
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               31.50 GiB
  PE Size               4.00 MiB
  Total PE              8065
  Alloc PE / Size       4994 / 19.51 GiB
  Free  PE / Size       3071 / 12.00 GiB
  VG UUID               Yi9o86-E9fU-ZbU0-Fvuj-9QQn-n1AJ-N04Hzb
So let’s look at the logical volumes and then resize the root volume to the new maximum size:
[root@ovmm32 ~]# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/vg_ovmm32/lv_root
  LV Name                lv_root
  VG Name                vg_ovmm32
  LV UUID                Z3slmg-uIYv-0cRP-q1ZQ-8kmA-12Ja-WU6FfT
  LV Write Access        read/write
  LV Creation host, time ovmm32.portrix.net, 2013-06-07 14:00:44 +0200
  LV Status              available
  # open                 1
  LV Size                11.70 GiB
  Current LE             2994
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0
   
  --- Logical volume ---
  LV Path                /dev/vg_ovmm32/lv_swap
  LV Name                lv_swap
  VG Name                vg_ovmm32
  LV UUID                6dfA32-ZH6r-Ulbn-ZzUT-uDOh-EwlP-y8E3PC
  LV Write Access        read/write
  LV Creation host, time ovmm32.portrix.net, 2013-06-07 14:01:55 +0200
  LV Status              available
  # open                 2
  LV Size                7.81 GiB
  Current LE             2000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1
   
[root@ovmm32 ~]# lvextend -l +100%FREE /dev/vg_ovmm32/lv_root
  Extending logical volume lv_root to 23.69 GiB
  Logical volume lv_root successfully resized
[root@ovmm32 ~]# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/vg_ovmm32/lv_root
  LV Name                lv_root
  VG Name                vg_ovmm32
  LV UUID                Z3slmg-uIYv-0cRP-q1ZQ-8kmA-12Ja-WU6FfT
  LV Write Access        read/write
  LV Creation host, time ovmm32.portrix.net, 2013-06-07 14:00:44 +0200
  LV Status              available
  # open                 1
  LV Size                23.69 GiB
  Current LE             6065
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0
   
  --- Logical volume ---
  LV Path                /dev/vg_ovmm32/lv_swap
  LV Name                lv_swap
  VG Name                vg_ovmm32
  LV UUID                6dfA32-ZH6r-Ulbn-ZzUT-uDOh-EwlP-y8E3PC
  LV Write Access        read/write
  LV Creation host, time ovmm32.portrix.net, 2013-06-07 14:01:55 +0200
  LV Status              available
  # open                 2
  LV Size                7.81 GiB
  Current LE             2000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1
Last step: resize the ext4 filesystem
[root@ovmm32 ~]# resize2fs /dev/mapper/vg_ovmm32-lv_root
resize2fs 1.43-WIP (20-Jun-2013)
Filesystem at /dev/mapper/vg_ovmm32-lv_root is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
The filesystem on /dev/mapper/vg_ovmm32-lv_root is now 6210560 blocks long.

[root@ovmm32 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_ovmm32-lv_root
                       24G  7.1G   16G  32% /
tmpfs                 3.9G     0  3.9G   0% /dev/shm
/dev/xvda1            477M  152M  296M  34% /boot

Oracle VM 3 storage choices – part 1: intro, types and support




During my most recent installation of an Oracle VM 3.2 server pool, I was pondering again all the different options to manage storage with OVM, wrote them down together with pros and cons (to help me decide which one to implement) and now I am sharing these notes.
First of all, you will need to understand the different kinds of storage that OVM can use for all its data. We will start with the easier ones and work our way up.
VM Templates, Assemblies and ISOs need to be stored somewhere. Since these files are not really needed for VMs to run, availability may not be of the utmost importance. However, if you are planning to (thin) clone off of templates, these need to be on the same storage as the virtual disks. Using physical disks for these files (especially ISOs) does not make much sense.
The shared storage for the server pool is needed for cluster heartbeat and eviction decisions. It does not require a lot of performance, but if this storage is unavailable the cluster gets into trouble. So availability trumps speed or throughput here. I have set this up on physical disks (both iSCSI and FC) in the past but sometimes ran into trouble when that storage was unavailable for short periods during SCSI bus rescans. What happened was that access to the SCSI device was blocked during a LIP, and after a few seconds nodes started to reboot because they were unable to access and/or ping the shared storage. The root cause of that is most likely something within the SAN, SCSI stack, HBA or driver and may or may not be the same for your environment. I prefer NFS for the cluster storage now.
The vm config storage still needs to be on a repo even if you are using physical disks for everything else. I have not tested what happens to running machines when this storage goes down (I imagine they’d simply keep running). And again, performance is not an issue; these are just a bunch of small xml files with the machine configs.
The actual vm disks (either physical or virtual) are the real big question when setting up an OVM system. The options are NFS vs repository vs physical disks (LUNs in a SAN). And also ethernet (or iSCSI) vs fibre channel. And not all options are supported for all file types which makes it even more complicated.
This overview should make the options clear:

NFS: ISOs/assemblies, pool cluster storage, VM config, VM disks
OCFS2 repository (on a LUN): ISOs/assemblies, VM config, VM disks
Physical disk (LUN): pool cluster storage, VM disks
So far, we have looked at what our options are and which are supported. Now let’s see which features are supported by each of the storage options. Performance considerations will be taken into account in another part of this blog series.
NFS-based repositories are by far the easiest method to set up. All you need is a filer somewhere and you are good to go. Backup is also really easy because you just need to mount the share and copy the data off of it. But there is no support for thin cloning, so every time you create a VM from a template or clone a VM, all the data will be copied, using additional space on the storage.

PRO:
– easiest setup
– easy backup: can directly read files, “only” need to shut down machines for consistent vdisks
CON:
– no thin cloning (and no cloning of running VMs)
– sneak peek into part 2: poor performance when compared with the other options
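To illustrate the easy-backup point: with an NFS repository you can mount the share on any backup host and copy the virtual disk images straight off it. The paths below are placeholders, not from my setup:
# mount -t nfs filer:/export/ovm_repo /mnt/repo
# rsync -a /mnt/repo/VirtualDisks/ /backup/ovm/VirtualDisks/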
The way OCFS repositories work is by creating a cluster filesystem on a shared LUN (either iSCSI or Fibre Channel) across all nodes. So obviously you will need that LUN first, and all the rest will be taken care of by the OVM Manager when you create a new repository. The cluster file system allows all nodes in the server pool to access the same files at the same time, and OCFS also provides a few features like reflink-based thin cloning of virtual disks. This comes at the small price of some added overhead, and the other risk is that a problem with the cluster file system will affect all the files and VMs depending on it at the same time. I will leave the discussion of iSCSI vs FC out of this post since this will most likely depend on your existing infrastructure.

PRO:
– thin cloning (based on reflinks)
– somewhat easy setup (LUNs only need to be set up once)
CON:
– backup requires setup of an NFS export of the fs
– resize (add storage) is annoying
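The reflink-based thin cloning mentioned above works at the file level on OCFS2. Conceptually it looks like the following, using the reflink utility that ships with the ocfs2 tooling; the paths and file names are illustrative, and OVM Manager normally does this for you when you clone a VM or template:
# cd /OVS/Repositories/<uuid>/VirtualDisks
# reflink source_disk.img thin_clone.img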
The final option available is to use physical disks. The term may be a bit misleading since these disks are just as physical or virtual as the others. Also, in a server pool, these cannot be disks local to any of the OVM servers, since those could only be accessed from that one server, which prevents VMs from failing over or being migrated to another server in the pool. So a physical disk in a server pool is a LUN (either iSCSI or Fibre Channel) on a SAN that is directly mapped to a virtual machine without a layer of OCFS in between. Creating these physical disks requires new LUNs to be created and presented through the SAN, which can be a number of manual steps. Fortunately, OVM supports storage-connect plugins which can be used to automate these tasks from the OVM Manager GUI. Plugins exist for arrays from a number of vendors including EMC, NetApp, Hitachi, Fujitsu and of course also Oracle’s own ZFS storage appliance. I have used physical disks with a generic array before, but that was a big pain and I highly recommend using the plugin if one exists for your storage. Not only does it make management of devices easier, but it also allows you to use features from your storage array to support thin cloning.

PRO:
– zfs based thin cloning (on ZFS SA, other arrays may have similar features)
– no ocfs or vdisk overhead (more on this in part 2)
– nice, easy integrated management from ovmm
CON:
– backup either with array tools or by cloning to an nfs or repo (but currently that is not scriptable through cli)
– (re)discovery of LUNs is not always smooth, may depend on setup
In conclusion I decided against NFS for my vdisk storage for two reasons. Performance was much better with all the other approaches, and the ability to create snapshots of running VMs is the foundation for our backup and recovery strategy. Without consistent snapshots (or clones) of running machines, the backup options are limited to regular file-based backups from within the VM guests, or require you to halt the VMs in order to take a backup of the whole machine.
Deciding between repositories and physical disks is a bit more challenging. On one hand, OCFS adds a bit of overhead and one more piece of your setup that can break. On the other hand, adding and rediscovering LUNs can also cause trouble, especially without the storage-connect plugin. Part 2 of this blog will be about benchmarking and comparing these options so the decision can be based on performance as well as manageability.

Oracle restart deprecated in 12c




The short summary here is: there is nothing new to see. There is just a note in the 12c docs that creates a bit of confusion. But let’s go back one step. This blog post started with (what looked like) a simple question by Christian Antognini:
Without much thinking, I shot from the hip and answered “clusterware / grid infrastructure” but I had to realize later that I really don’t know much about Oracle Restart. First of all, to me Oracle Restart has been just a stripped down installation of the clusterware. You download and install grid infrastructure to use it and in the end you use srvctl to manage your oracle items (listener, asm, databases and services) like in a RAC or RAC one node environment. Just without a real cluster. It may be more complicated than that but that’s how I see it. And that really may not be the whole truth.
My second answer was “it is still there, still works, is even documented.” Had I just scrolled up a tiny bit I would have seen a note similar to this one in the 12c database upgrade guide:
Oracle Restart is deprecated in Oracle Database 12c. Oracle Restart is currently restricted to manage single-instance Oracle databases and Oracle ASM instances only, and is subject to desupport in future releases.
And there was my second mistake. I confused deprecated with desupported. I first thought they simply abandoned the name and are now calling it clusterware light or something like that. I really had to google for the word “deprecated” to understand what it really means. I now interpret the note as saying: “Don’t be surprised if a future version of Oracle does not have the Restart feature”. There is no replacement that I know of from Oracle today, but I would hope and assume that they would provide an alternative if and when that happens. And if not, we’ll just recycle those good old start/stop scripts that we relied on before 11.2 came along.
So I guess the real answer to Chris should have been: “Excellent question. There is no alternative for single instances as of today. We may have to go back to our own or 3rd party scripts at some point”. Think more before you tweet. Lesson learned.
Since all this was so confusing (and still is), I decided to clear my head by checking it out and installing Oracle Restart on my lab machine that I had just set up. Just to verify that Restart is still fine. I had already installed the database software and set up a database and listener. Usually, you would install Restart first and the database after that because then DBCA would register with it for you. The process is documented very well in the 12c admin guide.
Download the clusterware from OTN, unzip, and start with runInstaller. I opted to install the grid infrastructure software only and went with the defaults on all other pages, ignoring some warnings. With RAC installs, I prefer to run the grid stuff as a separate grid user, but with this setup I was lazy and installed and ran everything as the oracle user.
I ignored the warning I received because I did not set up an extra asmadmin group, which would make sense if you have separate accounts for grid and database.
Yury Velikanov mentioned in his 12c gi install blog that the installer now offers an option asking for your root password so it could run the root.sh scripts for you. I was a bit disappointed to learn that for some reason I still had to log in as root and run it the old fashioned way. I guess it has to do with me just doing the “Install Grid Infrastructure Software Only” option.
The output of the root.sh script was nice enough to point me to the script I needed to run to turn this into a true standalone clusterware install for Restart.
[root@ora12c ~]# /u01/app/12.1.0/grid/root.sh 
Performing root user operation for Oracle 12c 

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

To configure Grid Infrastructure for a Stand-Alone Server run the following command as the root user:
/u01/app/12.1.0/grid/perl/bin/perl -I/u01/app/12.1.0/grid/perl/lib -I/u01/app/12.1.0/grid/crs/install /u01/app/12.1.0/grid/crs/install/roothas.pl


To configure Grid Infrastructure for a Cluster execute the following command as oracle user:
/u01/app/12.1.0/grid/crs/config/config.sh
This command launches the Grid Infrastructure Configuration Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.
And so I executed that script.
[root@ora12c ~]# /u01/app/12.1.0/grid/perl/bin/perl -I/u01/app/12.1.0/grid/perl/lib -I/u01/app/12.1.0/grid/crs/install /u01/app/12.1.0/grid/crs/install/roothas.pl
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE 
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE 
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node ora12c successfully pinned.
2013/07/02 13:41:04 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'

ora12c     2013/07/02 13:41:26     /u01/app/12.1.0/grid/cdata/ora12c/backup_20130702_134126.olr
2013/07/02 13:42:33 CLSRSC-327: Successfully configured Oracle Grid Infrastructure for a Standalone Server
And if I had done this before creating my database with dbca, it would have been picked up automatically and registered for me. But it was too late for that, and so I had to add the database and listener manually like this:
[oracle@ora12c ~]$ srvctl add database -db ORCL12 -oraclehome /u01/app/oracle/product/12.1.0/dbhome_1
[oracle@ora12c ~]$ srvctl status database -db ORCL12
Database is not running.
In fact it was running from an earlier manual STARTUP command which srvctl could not know about. So I shut it down manually first and started it again properly through srvctl.
[oracle@ora12c ~]$ sqlplus sys/ as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Tue Jul 2 13:51:49 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Enter password: 

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

[oracle@ora12c ~]$ srvctl start database -db ORCL12
The listener was a similar story: it was already running, and adding it generated an error. So I stopped it before adding it to GI. You could also start the listener from the grid home by setting the ORACLE_HOME environment variable to the grid home, but I had already configured it in the other home so I just used that.
[oracle@ora12c ~]$ srvctl add listener
PRCN-2061 : Failed to add listener ora.LISTENER.lsnr
PRCN-2065 : Port(s) 1521 are not available on the nodes given
PRCN-2067 : Port 1521 is not available across node(s) "ora12c"

[oracle@ora12c ~]$ lsnrctl stop

LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 02-JUL-2013 13:54:04

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ora12c)(PORT=1521)))
The command completed successfully
[oracle@ora12c ~]$ srvctl add listener -listener LISTENER
[oracle@ora12c ~]$ lsnrctl status
[oracle@ora12c ~]$ srvctl start listener
I verified everything by rebooting the whole server and just as expected the listener and database instance were started automatically after the reboot.
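As a final sanity check (the resource names below are the ones used in this post), the Restart stack can be queried like this:
[oracle@ora12c ~]$ srvctl status database -db ORCL12
[oracle@ora12c ~]$ srvctl status listener
[oracle@ora12c ~]$ crsctl stat res -t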
