Hi Felix,
cheers, I would appreciate that very much!
Maybe ONTAP 9's "vol move" can handle it hassle-free...
Peter
Hi,
I guess you also imported the SnapDrive module, correct?
I'm asking to double-check because some PowerShell commands are not integrated with SnapCenter itself, and New-SdLun is one of them. It is a command that belongs to SnapDrive.
With the latest release, SDW can be installed in integrated mode or in standalone mode. In integrated mode, only the integrated commands are allowed to be executed; the other commands show the error "This command is not supported in SnapCenter Plug-in mode".
You can enable these commands by using the cmdlet Set-StandaloneCommand.
The following examples show how to disable and enable the commands that are not integrated with SnapCenter.
Disable Command
Set-StandaloneCommand
Enable Command
Set-StandaloneCommand -Enable
Hello,
Sorry, I left out the important detail that I am getting the message on the SnapCenter server itself:
New-SdLun : This command is not supported in SnapCenter Plug-in mode
Thanks for the info. However, I am not able to enable the command on the SnapCenter server:
PS C:\Windows\system32> Set-StandaloneCommand
Set-StandaloneCommand : The term 'Set-StandaloneCommand' is not recognized as the name of a cmdlet, function, script
file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is
correct and try again.
At line:1 char:1
+ Set-StandaloneCommand
+ ~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (Set-StandaloneCommand:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
Hi David,
Are you trying to do that on the plug-in host or on the SnapCenter host?
Either way, try it this way:
- import both modules, SnapCenter and SnapDrive
- then open the connection with Open-SmConnection ...
- instead of Set-StandaloneCommand, use Set-SdSettings $true
- and then run New-SdLun
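A minimal sketch of that sequence, assuming the SnapDrive and SnapCenter PowerShell modules are installed under those names (double-check the module names and the Set-SdSettings parameters with Get-Help in your environment):

Import-Module SnapDrive        # SnapDrive for Windows cmdlets (module name assumed)
Import-Module SnapCenter       # SnapCenter cmdlets (module name assumed)
Open-SmConnection              # connect to the SnapCenter server; add your server/credential parameters as required
Set-SdSettings $true           # allow the non-integrated (standalone) SnapDrive cmdlets
New-SdLun                      # should no longer fail with the Plug-in mode error; supply your own LUN parameters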
let me know
Also, I forgot to mention: you can use New-SdStorage to provision the LUN instead of using New-SdLun. That avoids all the previously mentioned steps and keeps your script tidier.
regards
New-SdStorage is exactly what I needed - thank you!
Hi everybody,
I have created a role and a user for VSC on an ONTAP 9.0 cluster via the RBAC User Creator. The user has only discovery permissions, because it is not used for backup, restore, or cloning operations.
The role has the following capabilities:
Hello, we had a LUN "disappear" (it was used as a log drive, L:), and after that happened SME stopped working. Even with NetApp support we could never find the LUN, and they told us to recreate it, so we created a new LUN with the same drive letter but a different LUN name. NetApp won't help us with SME because of the support contract. Thanks. Anyway, a scheduled task runs the SME jobs; the task is fine and kicks off the jobs, but the jobs fail to back up the databases.
This is the email we get:
“HA Group Notification from NETAPP (CLIENT APP ERROR Backup: SME Version 7.1: (111) on MBX: SnapManager for Exchange online backup failed. (Exchange 14.3.123.4) Error code: 0x80042306) WARNING”
The only thing that stands out is that we have SME 7.2 on the Exchange server, so why is it reporting 7.1 here?
The error I find in the log is:
[10:23:43.740] Error in calling VSS API: Error code = 0x80042306
Error description: VSS_E_PROVIDER_VETO
And I attached a log file I found for a failed job.
Any ideas?
Hi,
VSS_E_PROVIDER_VETO is a generic error, so you need to investigate the possible cause further. That means checking at least the application event logs and the SnapDrive logs:
- Did you create the new LUN using SnapDrive? If not, please make sure the disk contains all the required partitions (the primary partition and the MSR partition). You can use diskpart to check that (see the example commands after this list): https://technet.microsoft.com/en-us/library/cc766465(v=ws.10).aspx
- Disable the antivirus engine, just to rule out that it is running and putting a veto on the backup procedure.
- When the VSS framework is involved in a backup, all the VSS components need to be in a clean state, otherwise the backup will fail. Please check the output of the command "vssadmin list writers".
- Check the SnapDrive logs and the Microsoft event logs at the time of the VSS error to see whether they give you more information on the error.
- Also try to run a backup without including the "affected" LUN.
If that doesn't help, I think the next step is to enable and collect a VSS trace...
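For reference, the partition and writer checks above look roughly like this from an elevated command prompt (disk 1 is just a placeholder; pick the disk that backs the recreated LUN):

diskpart
DISKPART> list disk            (identify the disk number of the L: LUN)
DISKPART> select disk 1
DISKPART> list partition       (you should see the MSR partition plus the primary partition)
DISKPART> exit
vssadmin list writers          (every writer should report "State: [1] Stable" and "Last error: No error")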
You defined the Discovery role privileges for the directly connected SVMs, is that correct?
The Discovery role enables you to discover all Storage Virtual Machines (SVMs, formerly known as Vservers) that are directly connected to VSC.
The thing is that the mentioned getDedupeSizeShared(...) is used only if you are trying to discover an SVM that is not directly connected.
Can you please check?
- Yes, the LUN was created with SnapDrive
- Disabling the AV didn't seem to change anything
- vssadmin list writers shows no errors (after the change described below)
We deleted some old snapshots that might have been causing the issue. The weird thing is that I am now not seeing any new job reports or errors, but it shows that the last run was today; the last report is from last month.
Then suddenly the old volume shows up, but of course there is no LUN connected to it.
There are several SVMs configured on the cDOT cluster, but only one SVM (called vmware) is registered with the VSC. The discovery role was created inside the SVM "vmware". Also, the cluster itself is not registered with the VSC.
Who is "showing that the last run was today".. ? the windows task scheduler
Did you try to run a manual backup to see if it will create a new report in the installation folder?
did you find some information in the event viewer?
There was a problem with the security properties after copying back the "snapcreator.properties" file.
Recreating the file solved our problem.
I found this event after kicking off a job. Does this mean it doesn't back up because the mailbox server is not the active mailbox server?
Job : new-backup -Server 'USDAG' -ClusterAware -GenericNaming -ManagementGroup 'Daily' -RetainDays 4 -RetainUtmDays 2 -UseMountPoint -MountPointDir 'C:\SnapMgrMountPoint' -ActiveDatabaseOnly -BackupTargetServer USMAILBOX -RemoteAdditionalCopyBackup $True -RetainRemoteAdditionalCopyBackupDays 4 -AdditionalCopyBackupDAGNode USMAILBOX
The operation executed with the following results.
Details: new-backup cmdlet will exit as it is not running in the Active node : USMAILBOX
Stack Trace: at System.Management.Automation.Internal.PipelineProcessor.SynchronousExecuteEnumerate(Object input, Hashtable errorResults, Boolean enumerate)
at System.Management.Automation.PipelineOps.InvokePipeline(Object input, Boolean ignoreInput, CommandParameterInternal[][] pipeElements, CommandBaseAst[] pipeElementAsts, CommandRedirection[][] commandRedirections, FunctionContext funcContext)
at System.Management.Automation.Interpreter.ActionCallInstruction`6.Run(InterpretedFrame frame)
at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame)
I would think it would still produce some kind of report in SME, but it does not.
Hi,
In the case of a DAG, if a job is scheduled with -ClusterAware, the job runs only if the host on which it is scheduled is the active node of the DAG.
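If you want to confirm which node currently holds the active copies before the job runs, something like this from the Exchange Management Shell should show it (the server name is taken from your job above, purely as an example):

Get-MailboxDatabaseCopyStatus -Server USMAILBOX |
    Where-Object { $_.Status -eq 'Mounted' }    # "Mounted" copies are the active ones on that node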
Hi,
GetDedupeSizeShared is not a real ZAPI call, hence the tool uses the system-cli command with the following options:
set diag
sis stat -vserver <vservername> -volume <volumename> -field shared-data
You can try to run the same command as the target user, as in the example below.
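For example, from an SSH session to the cluster management LIF, logged in as the VSC discovery user (the user name and LIF name here are placeholders):

ssh vsc_discovery@cluster-mgmt-lif
cluster1::> set diag
cluster1::*> sis stat -vserver <vservername> -volume <volumename> -field shared-data

If the role is missing privileges, you would typically get an authorization error here instead of the shared-data value.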
Then, as far as I can see, you did not define all the required privileges for the discovery role.
Please double-check: for example, you have to add "system node run" with all access, among others.
Here is the list, on page 28:
https://library.netapp.com/ecm/ecm_download_file/ECMLP2371573
Hello again,
I'm happy to present a solution for the cDOT environment:
1. Create a new Resource Pool for the destination aggregate on OCUM -> Storage -> Resource Pools
2. Add the new Resource Pool to the Storage Policy in SnapProtect -> Storage Policies -> Policy xy -> Properties -> Provisioning -> Add Resource Pool
3. Move the volume from one aggregate to the other via the CLI or the GUI (see the example command after this list)
4. Check that new snapshots have been transferred according to their schedule
5. Remove the old Resource Pool from the Storage Policy in SnapProtect -> Storage Policies -> Policy xy -> Properties -> Provisioning -> Remove Resource Pool
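For step 3, the CLI version of the move is a single command, plus one to monitor it (the SVM, volume, and aggregate names below are placeholders):

cluster1::> volume move start -vserver svm1 -volume vol_data01 -destination-aggregate aggr_new
cluster1::> volume move show -vserver svm1 -volume vol_data01    (check until the move reports that it is done)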
I can't give any guarantee for your environment, but in ours it works well and without any side effects.
I would be glad to hear that it worked for you too.
Regards,
KFU
Hi Felix,
great news, thank you!
We will give this a shot as soon as our cluster gets new shelves.
Cheers
Peter