Environment
SFHA/DR = 6.1
OS = RHEL 6.3
Query
Is any technote available for the subject ERROR? I hit this when switching back (switchover) to the primary site from DR. The ERROR occurred while onlining the RVGPrimary resource.
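The replication state at the time can be captured with something like the following (a sketch only; the disk group and RVG names are placeholders, not my real objects):
# replication status for the RVG
vradmin -g datadg repstatus datarvg
# RVG and RLINK records in detail
vxprint -g datadg -Vl
vxprint -g datadg -Pl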
Environment
OS = RHEL 6.3
SFHA/DR
JavaConsole version = VCS_Cluster_Manager_Java_Console_6.0.1_for_Linux.tar.gz
Query
It seems that the Java Console takes a long time to log in.
Furthermore, sometimes when I take a resource OFFLINE from the Java Console, the console appears to do nothing (no downward arrow is shown on the resource) and only responds after a while, yet the resource stays Online.
Here is the link to this month's newsletter. It includes two very good whitepapers!
Well worth reading.
http://symantecemail.com/2014/march/management/new...
Hi,
Can anyone please let me know whether Veritas Volume Replicator is still available for Solaris?
Server details as follows:
One T4-2 server
2 CPU
OS: Solaris
Non Clustered
Thanks
When was the disk group disabled, and when did the DiskGroup resource fault? (Please share the symptoms.)
Executing the command:
vxconfigrestore -p -l /tmp/backups dg01
returns the error message:
VXVM vxconfigrestore ERROR V-5-2-3705 Diskgroup dg01 is currently online imported
So I deport the disk group:
vxdg deport dg01
and run vxconfigrestore again:
vxconfigrestore -p -l /tmp/backups dg01
which returns another error message:
VXVM vxconfigrestore ERROR V-5-2-3703 Diskgroup dg01 appears to be a deported disk group.
Who can help me?
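For reference, the precommit/commit sequence I understand from the vxconfigrestore documentation is roughly the following (a sketch; dg01 and /tmp/backups are taken from above):
# precommit the restore from the backup directory
vxconfigrestore -p -l /tmp/backups dg01
# review the precommitted configuration, then either commit it ...
vxconfigrestore -c -l /tmp/backups dg01
# ... or abandon the precommit
vxconfigrestore -d -l /tmp/backups dg01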
Hello Everybody
I am replacing my current storage with a new array and have to transfer my data to this new hardware.
I believe that mirroring the current LUNs to the new LUNs, and then removing the old LUNs, is the best way to move the data to the new storage.
My question is: how can I do that using Veritas Volume Manager?
This is my scenario:
OS: Solaris 10
Veritas Volume Manager:
pkginfo -l VRTSvxvm |grep VERSION
VERSION: 5.0,REV=04.15.2007.12.15
Veritas Cluster:
pkginfo -l VRTSvcs | grep VERSION
VERSION: 5.0
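The outline I have in mind is roughly the following (only a sketch; datadg, datavol, newdisk01 and olddisk01 are placeholder names, not my real objects):
# add the new LUN to the disk group
vxdg -g datadg adddisk newdisk01=<new_device>
# mirror the volume onto the new disk and watch the sync task
vxassist -g datadg mirror datavol newdisk01
vxtask list
# once the new plex is ACTIVE, remove the mirror that sits on the old disk
vxassist -g datadg remove mirror datavol \!olddisk01
# finally remove the old disk from the disk group
vxdg -g datadg rmdisk olddisk01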
As per your Support and Customer Services departments, I've been told I need to engage an engineer here.
Please assist.
Hi,
We are planning a storage migration from EMC Utility 2 to Utility 3 storage. All the file systems are VxFS. We plan to mirror the old disks with the new disks, sync the data, and then remove the old disks from the mirror. Does the new LUN size need to be the same in order to create the mirror and sync the data?
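A quick way to compare the space the mirror needs with what the new disks offer might be (a sketch; datadg and datavol are placeholder names):
# volume and plex lengths in the disk group
vxprint -g datadg -ht datavol
# free space available on the disks in the disk group
vxdg -g datadg free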
We're seeing disk group import times of 5+ minutes for disk groups that contain more than 25 LUNs.
I think the issue is related to the number of paths per LUN, which is 4, so VxVM is potentially scanning N x 4 paths?
Has anyone else seen this behavior, and more importantly, what was done to reduce the disk group import times?
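A few commands that might show where the time is going (a sketch; the DMP node name is a placeholder):
# enclosures, array types and LUN counts
vxdmpadm listenclosure all
# paths behind a single DMP node
vxdmpadm getsubpaths dmpnodename=emc0_0123
# current DMP tunables
vxdmpadm gettune all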
Hi all,
I attached a new enclosure, a NetApp FAS6250, but something is wrong.
I suspect that ENCLR_NAME and ENCLR_SNO are not populated correctly.
[root@as-lnx150 ~]# vxdmpadm listenclosure all
ENCLR_NAME ENCLR_TYPE ENCLR_SNO STATUS ARRAY_TYPE LUN_COUNT
=======================================================================================
emc0 EMC 000292601747 CONNECTED A/A 32
disk Disk DISKS CONNECTED Disk 1
fas62800 FAS6280 200000580654 CONNECTED ALUA 45
svm_unix0 svm_unix Avixz$EG07l5 CONNECTED ALUA 32
and the output from vxdisk -o alldgs list is:
fas62800_42 auto:cdsdisk oratesta_dg06 oratesta_dg online thinrclm
fas62800_43 auto:cdsdisk oratesta_dg07 oratesta_dg online thinrclm
fas62800_44 auto:cdsdisk oratesta_dg08 oratesta_dg online thinrclm
svm_unix0_0 auto:cdsdisk - - online thinrclm
svm_unix0_1 auto:none - - online invalid thinrclm
svm_unix0_2 auto:none - - online invalid thinrclm
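Two things I was thinking of checking (a sketch; netapp0 is only an example name for the rename):
# which ASLs are installed and which arrays they claim
vxddladm listsupport all
# if only the name is wrong, DMP can rename the enclosure
vxdmpadm setattr enclosure svm_unix0 name=netapp0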
Hi,
I have one server and two Sun StorageTek 2540 arrays connected through the SAN. Solaris 10 is running on the server and VxVM 6.0 is installed.
But I'm getting only one enclosure name for both arrays.
Please suggest.
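To see how the two arrays are being grouped, something like this might help (a sketch; the enclosure name is whatever vxdmpadm reports on your host):
# enclosures and their serial numbers
vxdmpadm listenclosure all
# DMP nodes that have been placed under that one enclosure
vxdmpadm getdmpnode enclosure=<enclosure_name>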
Dear,
I'm trying to use vxmake to recreate disk groups replicated with EMC BCV.
On the source system, VxVM is at version 6.0.1 with FS layout version 7.
The target is running VxVM 5.1SP1RP2, so FS layout v7 should be fine.
During the import process, a vxmake is issued, but it complains about the following:
VxVM vxmake ERROR V-5-1-327 variable name not recognized, context:
thin=off
thinreclaim=off
May I safely remove these variables from the text file used for the import?
Thanks in advance for your help and advice.
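If removing them is indeed safe, this is roughly how I would strip them before feeding the file to vxmake (a sketch; vxmake.desc is a placeholder for my description file):
# drop the attributes the 5.1SP1RP2 vxmake does not recognize
sed -e 's/ *thin=off//g' -e 's/ *thinreclaim=off//g' vxmake.desc > vxmake.desc.51
# rebuild the configuration from the edited description file
vxmake -g mydg -d vxmake.desc.51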
I am trying to mount database storage in a whole root zone on Solaris 10.
This is my zonecfg:
zonename: test1
zonepath: /export/Zones/test1
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
hostid: 5654fe54
fs:
dir: /etc/vx/licenses/lic
special: /etc/vx/licenses/lic
raw not specified
type: lofs
options: []
fs:
dir: /etc/globalzone
special: /etc/nodename
raw not specified
type: lofs
options: []
net:
address: 192.169.0.5
physical: ce0
defrouter: 192.169.0.1
device
match: /dev/vxportal
device
match: /dev/fdd
device
match: /dev/vx/rdsk/dgTEST5/vlTEST5_01
device
match: /dev/vx/dsk/dgTEST5/vlTEST5_01
I can manually mount the storage per this article:
https://sort.symantec.com/public/documents/sfha/6....
I started the vxfsldlic service per this article:
https://sort.symantec.com/public/documents/sfha/6....
The storage does not mount when the zone reboots.
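For reference, the manual mount that works is roughly this, run inside the zone once the vxfsldlic service is online (a sketch; /data is just an example mount point):
# mount the delegated VxFS volume inside the zone
mount -F vxfs /dev/vx/dsk/dgTEST5/vlTEST5_01 /data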
Thanks for your assistance.
I am running Veritas Volume Manager on HP-UX; the version is B.05.10.01. It ships with an embedded Java at version 1.6.0.06. Is there a more recent release that would bump my Java up to a higher version?
Thanks
Also, removing the embedded java would work.
Hi,
I have got an interesting one here. Currently:
my_DiskGroup disabled 1379410956.28.host
but all the volumes of this disk group are mounted; df works.
vxprint does not work on this disk group.
I have tried to run "vxdg import my_DiskGroup" again, but it does not work.
I don't know how it got into this state, but I suspect the SAN disks might have been lost at some point.
Does anyone have any idea how to bring the DG online without unmounting the file systems?
The version of VxVM is 5.1SP1, by the way.
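Commands I would use to capture the current state (a sketch; output omitted):
# disk states for this disk group
vxdisk -o alldgs list
# disk group state details
vxdg list my_DiskGroup
# volume usability as the kernel sees it
vxinfo -g my_DiskGroup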
Regards,
Ida
Hi.
Solaris 10 SPARC, 5.0MP3 RP5.
llt, gab, vxfen and vcs are not starting at boot on Solaris 10.
During boot I see the below message on the console:
-------------------------------------------------------------------------------------------
VxVM sysboot INFO V-5-2-3390 Starting restore daemon...
LLT INFO V-14-1-10009 LLT Protocol available
GAB INFO V-15-1-20021 GAB available
----------------------------------------------------------------------------------------------
But the services are not up.
# /etc/init.d/llt status
LLT: is loaded but not configured.
# /etc/init.d/gab status
GAB: module not configured
But if I issue the commands explicitly, then they start:
# /etc/init.d/llt start
Starting LLT...
Starting LLT done.
# /etc/init.d/llt status
LLT: is loaded and configured.
# /etc/init.d/gab start
Starting GAB...
Starting GAB done.
# /etc/init.d/gab status
GAB: module is configured
Now I am facing a problem with vxfen.
In the log I see the message "VCS CRITICAL V-16-1-10037 VxFEN driver not configured. Retrying..."
I tried several times to start vxfen, but no luck.
But if I run "vxfenconfig -c" then it comes up:
# /sbin/vxfenconfig -c
VXFEN vxfenconfig NOTICE Driver will use SCSI-3 compliant disks.
And now the cluster comes up with "hastart".
I have to do this every time on all the nodes whenever a node or the cluster reboots.
Can someone please suggest what the issue could be and why the cluster services are not running at boot?
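Things I have checked, or plan to check, so far (a sketch; the exact rc script names may differ on your release):
# the startup scripts delivered by the packages
ls /etc/rc2.d /etc/rc3.d | grep -iE 'llt|gab|vxfen|vcs'
# the configuration files the drivers need at boot
cat /etc/llttab /etc/llthosts /etc/gabtab /etc/vxfenmode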
Thanks & Regards,
Shashi Kanth.
Hello,
We have SFHA 6.0 installed on Solaris 10. We have changed the hostname of the Solaris server, and hence I need to change the disk group ID of a particular disk group. What is the command or procedure for this?
I have set the new hostname using vxdctl hostid, but the disk group ID does not change after this.
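What I have tried so far, plus an option I have read about but not verified (a sketch; mydg and newhostname are placeholders):
# stamp the new hostname into the volboot file
vxdctl hostid newhostname
# deport, then re-import so the new hostid is written to the disk group
vxdg deport mydg
vxdg import mydg
# alternatively (unverified here): -o updateid on import is said to assign a new disk group ID
# vxdg -o updateid import mydg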
Dear Symantec Colleague,
One of the disks gives an error when I try to bring it under Veritas control:
root@gumas1n> /etc/vx/bin/vxdisksetup -i c2t0d0
VxVM vxdisksetup ERROR V-5-2-57 /dev/vx/rdmp/c2t0d0: Device does not match the kernel configuration
Please share your ideas for resolving this issue.
I also tried to configure it through the vxdiskadm command, but that did not work either.
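Steps I was going to try next (a sketch; c2t0d0 is the device from above):
# make the OS and VxVM rescan the device tree
devfsadm -Cv
vxdctl enable
# check how VxVM currently sees the device
vxdisk list c2t0d0
# check the OS label / partition table
prtvtoc /dev/rdsk/c2t0d0s2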
Hi,
recoveryoption=fixedretry retrycount=n
How often does DMP retry sending the I/O request?
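For context, this is how I understand the option is viewed and set (a sketch; emc0 is a placeholder enclosure name):
# show the current error-recovery settings for the enclosure
vxdmpadm getattr enclosure emc0 recoveryoption
# set fixed-retry recovery with a retry count of 5
vxdmpadm setattr enclosure emc0 recoveryoption=fixedretry retrycount=5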
Thank you