Friday, December 4, 2015

Oracle VM: Server Not Owned By This Manager

So if you've ever lost your Oracle VM Manager and been unable to recover it because you never took a backup, shame on you: you should have followed the procedure documented in 4.12 Backing Up Oracle VM Manager; says he, having been in this situation many times.

For the hundreds of you left, I'll walk you through the options for getting out of the mess you're in.

The first, and currently the only supported, option is to re-install the OVM Manager and specify the old UUID at installation time, as discussed in 4.13 Restoring Oracle VM Manager.

If, like me, you've done neither, and you already have another OVM Manager that you just want to move the old servers to, don't panic: all is not lost.

When you install an Oracle VM Server it creates a number of Berkeley DB files in /etc/ovs-agent/db.

[root@someserver1 ~]# ls -ltrh /etc/ovs-agent/db/
total 44K
-rw-r--r-- 1 root root 12K Jun 24  2013 repository
-rw-r--r-- 1 root root 12K Mar 17 16:47 aproc
-rw------- 1 root root 12K Jun 13 17:48 exports
-rw-r--r-- 1 root root 12K Aug  6 15:49 server


[root@someserver1 db]# file server
server: Berkeley DB (Hash, version 8, native byte-order)

So what, I hear you ask? Well, while unsupported, these files can be manipulated using the OVS agent command line utilities.

[root@someserver1 ~]# ls -ltrh /usr/sbin/ovs-agent*
-rwxr-xr-x 1 root root 1.2K Feb  5  2013 /usr/sbin/ovs-agent-userdel
-rwxr-xr-x 1 root root 1.4K Feb  5  2013 /usr/sbin/ovs-agent-useradd
-rwxr-xr-x 1 root root 3.6K Feb  5  2013 /usr/sbin/ovs-agent-rpc
-rwxr-xr-x 1 root root 3.1K Feb  5  2013 /usr/sbin/ovs-agent-passwd
-rwxr-xr-x 1 root root 2.1K Feb  5  2013 /usr/sbin/ovs-agent-keygen
-rwxr-xr-x 1 root root 2.6K Feb  5  2013 /usr/sbin/ovs-agent-fake-uuid
-rwxr-xr-x 1 root root 2.3K Feb  5  2013 /usr/sbin/ovs-agent-dlm
-rwxr-xr-x 1 root root 5.6K Feb  5  2013 /usr/sbin/ovs-agent-db
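
Since we're about to poke around in these databases by hand, it's worth taking a copy of the db directory first so you can roll back; a simple tar will do (the backup path here is my own choice, put it wherever suits):

[root@someserver1 ~]# tar czf /root/ovs-agent-db-backup.tar.gz /etc/ovs-agent/db
tar: Removing leading `/' from member names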

For example, ovs-agent-db allows you to both query and update items in these files:

[root@someserver1 ~]# /usr/sbin/ovs-agent-db --help
usage: ovs-agent-db [option] ...

Examples:
  ovs-agent-db create_db db
  ovs-agent-db delete_db db
  ovs-agent-db dump_db db
  ovs-agent-db truncate_db db
  ovs-agent-db update_db db value
  ovs-agent-db read_item db key
  ovs-agent-db delete_item db key
  ovs-agent-db write_item db key value
  ovs-agent-db upgrade_databases
  ovs-agent-db get_cluster_db_home

Use python syntax to specify the "value" parameters:
  * None: "None"
  * string: "'some string'"
  * number: "1234.5678", "0x1234"
  * boolean: "True", "False"
  * list: "['foo', 1234]"
  * tuple: "('foo', 1234)"
  * dict: "{'foo': 'bar', 1: 2}"

options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit
  -d DB_HOME, --db-home=DB_HOME
                        specify db home [default: /etc/ovs-agent/db]
  -c, --cluster-db-home
                        use cluster db home

So to dump the contents of the server DB file you would simply do:

[root@someserver1 ~]# /usr/sbin/ovs-agent-db dump_db server
{'cluster_state': 'DLM_Ready',
 'clustered': True,
 'is_master': False,
 'manager_core_api_url': 'https://{some_encrypted username}:{some_encrypted_password}@{some_ip_address}:7002/ovm/core/OVMManagerCoreServlet',
 'manager_uuid': '0004fb000001000dontdothisforreal',
 'node_number': 1,
 'pool_alias': 'MyServerPool1',
 'pool_member_ip_list': ['{someserver1_ip}', '{someserver2_ip}', '{someserver3_ip}'],
 'pool_uuid': '0004fb000002000018facb931f623134',
 'pool_virtual_ip': '{someserver_pool_ip}',
 'poolfs_nfsbase_uuid': '',
 'poolfs_target': '/dev/mapper/360014059ac279e4d573bd4c38d9c9cd6',
 'poolfs_type': 'lun',
 'poolfs_uuid': '0004fb000005000078a55251d7d969ad',
 'registered_hostname': '{some_server}',
 'registered_ip': '{some_server_ip}',
 'roles': set(['xen', 'master', 'utility']),
 'stat_interval': 160}

The values in {} have been changed to protect the innocent; the braces themselves don't appear in the real output.

Before you can move these servers over to the new manager you need to find the UUID of your current OVM Manager. You can get this by clicking on "Help" followed by "About" in the OVM Manager user interface.

The UUID is also available on the OVM Manager file system, in the /etc/sysconfig/ovmm file.

[root@ovmmserver ~]# cat /etc/sysconfig/ovmm
RUN_OVMM=YES
JVM_MEMORY_MAX=4096m
JVM_MAX_PERM=512m
DBBACKUP=/u01/app/oracle/mysql/dbbackup
DBBACKUP_CMD=/opt/mysql/meb-3.8/bin/mysqlbackup
UUID=0004fb0000010000d1b7eca67025e1c1
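
If you just want the UUID on its own, a quick grep pulls it out:

[root@ovmmserver ~]# grep ^UUID /etc/sysconfig/ovmm
UUID=0004fb0000010000d1b7eca67025e1c1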

Now on to actually changing the values, checking first that you've got your commands right:

[root@someserver1 ~]# ovs-agent-db read_item server manager_uuid
'0004fb000001000dontdothisforreal'

[root@someserver1 ~]# ovs-agent-db write_item server manager_uuid "'0004fb0000010000d1b7eca67025e1c1'"

[root@someserver1 ~]# ovs-agent-db read_item server manager_uuid
'0004fb0000010000d1b7eca67025e1c1'
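
Remember that the manager_uuid needs fixing on every server in the pool, not just this one. Assuming you can ssh between the servers as root, a quick loop saves some typing (the hostnames here are made up; substitute your own):

[root@someserver1 ~]# for h in someserver2 someserver3; do
>   ssh root@$h "/usr/sbin/ovs-agent-db write_item server manager_uuid \"'0004fb0000010000d1b7eca67025e1c1'\""
> done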

Don't go patting yourself on the back, we're not done yet. We now need to re-discover the server in the new OVM Manager, but before we do we need to fix the mounted pool file systems and repositories and make them think that they too are owned by the new OVM Manager.

So let's look at the mounted file systems:

[root@someserver1 ~]# df -kh
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3             421G  1.2G  398G   1% /
/dev/sda1              99M   28M   67M  29% /boot
tmpfs                 286M     0  286M   0% /dev/shm
none                  286M   40K  286M   1% /var/lib/xenstored
/dev/mapper/360014059ac279e4d573bd4c38d9c9cd6
                       20G  271M   20G   2% /poolfsmnt/0004fb000005000078a55251d7d969ad
/dev/mapper/360014057549345adab90d4b2ddab8ed6
                     1000G  590G  411G  59% /OVS/Repositories/0004fb000003000074aaa55a9bad117a
someserver1.example.com:/nfs
                      421G  1.2G  398G   1% /OVS/Repositories/0004fb00000300000284796418b4eb9d

You'll notice that on my system I have an NFS mount back to itself. I'll explain this in another blog post, but essentially I am presenting the wasted space on /dev/sda back to the server so that it can be put to good use.
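
Two flavours of file need editing here, one per pool file system and one per repository, both described below. A quick find across the mount points lists them all in one go (paths as mounted on my system):

[root@someserver1 ~]# find /poolfsmnt /OVS/Repositories -maxdepth 2 -name '.ovspoolfs' -o -name '.ovsrepo'
/poolfsmnt/0004fb000005000078a55251d7d969ad/.ovspoolfs
/OVS/Repositories/0004fb000003000074aaa55a9bad117a/.ovsrepo
/OVS/Repositories/0004fb00000300000284796418b4eb9d/.ovsrepo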

Server Pool Filesystem

Inside each of the mounted pool file systems you will find a .ovspoolfs file, which contains the value OVS_POOLFS_MGR_UUID holding the old OVM Manager UUID. This needs to be changed too.

[root@someserver1 ~]# cat /poolfsmnt/0004fb000005000078a55251d7d969ad/.ovspoolfs
OVS_POOLFS_UUID=0004fb000005000078a55251d7d969ad
OVS_POOLFS_MGR_UUID=0004fb000001000042f6cecb9c3dc70f
OVS_POOLFS_VERSION=3.0
OVS_POOLFS_POOL_UUID=0004fb000002000018facb931f623134
OVS_POOLFS_LUN_UUID=360014059ac279e4d573bd4c38d9c9cd6
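
As you can see, .ovspoolfs is a plain text file, so any editor will do; a sed one-liner is one way to swap in the new manager UUID (take a copy first; sed's -i.bak does that for you):

[root@someserver1 ~]# sed -i.bak 's/^OVS_POOLFS_MGR_UUID=.*/OVS_POOLFS_MGR_UUID=0004fb0000010000d1b7eca67025e1c1/' /poolfsmnt/0004fb000005000078a55251d7d969ad/.ovspoolfs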

Each Pool Filesystem contains other Berkeley DB files that do not need to be changed, but I'm listing them here for completeness.

[root@someserver1 ~]# ls -ltrh /poolfsmnt/0004fb000005000078a55251d7d969ad/db
total 36K
-rw------- 1 root root 12K Jul 22  2013 monitored_vms
-rw------- 1 root root 12K Feb 18 16:01 server_pool_servers
-rw------- 1 root root 12K Aug  5 17:10 server_pool

As a reminder, to dump the contents of these files use the ovs-agent-db command, but this time you have to specify the --db-home parameter pointing at the location of the database files.

File: server_pool

This file describes the server pool characteristics.

[root@someserver1 ~]# ovs-agent-db --db-home=/poolfsmnt/0004fb000005000078a55251d7d969ad/db dump_db server_pool
{'auto_remaster': True,
 'pool_alias': 'MyServerPool1',
 'pool_master_hostname': '{someserver1.example.com}',
 'pool_member_ip_list': ['{someserver1_ip}', '{someserver2_ip}', '{someserver3_ip}'],
 'pool_uuid': '0004fb000002000018facb931f623134',
 'pool_virtual_ip': '{someserver_pool_ip}'}

File: server_pool_servers

This file contains details about the servers which are part of the pool.

[root@someserver1 ~]# ovs-agent-db --db-home=/poolfsmnt/0004fb000005000078a55251d7d969ad/db dump_db server_pool_servers
{'someserver1.example.com': {'is_master': False,
                               'node_number': 0,
                               'registered_ip': '{someserver1_ip}',
                               'roles': set(['xen', 'utility'])},
 'someserver2.example.com': {'is_master': False,
                               'node_number': 1,
                               'registered_ip': '{someserver2_ip}',
                               'roles': set(['xen', 'master', 'utility'])},
 'someserver3.example.com': {'is_master': False,
                               'node_number': 2,
                               'registered_ip': '{someserver3_ip}',
                               'roles': set(['xen', 'master', 'utility'])}}

File: monitored_vms

Finally, this file lists the VM guests being monitored by this pool.

[root@someserver1 ~]# ovs-agent-db --db-home=/poolfsmnt/0004fb000005000078a55251d7d969ad/db dump_db monitored_vms
{'0004fb00-0006-0000-54df-920567f90cd6': {'repo_id': '0004fb000003000074aaa55a9bad117a',
                                          'vm_id': '0004fb000006000054df920567f90cd6'},
 '0004fb00-0006-0000-63e2-a5ccff965b20': {'repo_id': '0004fb00000300000284796418b4eb9d',
                                          'vm_id': '0004fb000006000063e2a5ccff965b20'}}

Repository Filesystem

Inside each of the mounted repository file systems you will find a .ovsrepo file, which contains the value OVS_REPO_MGR_UUID holding the owning OVM Manager UUID. This needs to be changed too; in the output below I had already updated it to the new manager's UUID.

[root@someserver1 ~]# cat /OVS/Repositories/0004fb000003000074aaa55a9bad117a/.ovsrepo
OVS_REPO_UUID=0004fb000003000074aaa55a9bad117a
OVS_REPO_VERSION=3.0
OVS_REPO_MGR_UUID=0004fb0000010000d1b7eca67025e1c1
OVS_REPO_ALIAS=MY QNAP TS-119
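
Again it's just a text file, so the same sed approach works; repeat it for every repository you have mounted:

[root@someserver1 ~]# sed -i.bak 's/^OVS_REPO_MGR_UUID=.*/OVS_REPO_MGR_UUID=0004fb0000010000d1b7eca67025e1c1/' /OVS/Repositories/0004fb000003000074aaa55a9bad117a/.ovsrepo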

Having made all those changes it's now time to re-discover the servers. I didn't use the re-discovery option at this point; instead I chose to first remove the server and then re-add it from within the UI.

During the discovery OVM Manager will also discover the storage and server pools that this server knows about; however, it will not use the 'pool_alias' that was shown when we ran the 'ovs-agent-db dump_db' command. The new server pool will appear in the UI under its 'pool_uuid' value; you can rename it after discovery.

If you don't change all of the values correctly you could end up in the same position as me, with a slightly schizophrenic environment where the repositories claim to be owned by another manager. So be very careful, but don't worry too much: they can be fixed after the event by refreshing them from the UI.

Good luck, and make sure you follow the proper procedures in future.
