Welcome to Part 2. Here is a quick-start guide to setting up NDMP on the Celerra.
I’m assuming that all the steps in “Part 1” were completed successfully.
Note that you don’t need to shut down or restart the Data Movers when connecting the FC cable between the Data Domain and the Celerra.
In case you need to shut down the Data Movers that will be connected to the Tape Library Unit (TLU), here are the steps:
Connect to the Control Station using SSH, or run the commands from the Unisphere console.
Halt every Data Mover using the command:
$ server_cpu <datamovername> -halt -monitor now
Sample output:
[nasadmin@ns20 ~]$ server_cpu server_2 -halt -monitor now
server_2 : 33.done
The default reboot option attempts a warm reboot first; if the warm reboot fails, the command cold reboots the Data Mover.
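If several Data Movers will be attached to the TLU, you can halt them all from the Control Station with a small shell loop (a minimal sketch; server_3 is just a placeholder, substitute your own Data Mover names):
$ for dm in server_2 server_3; do server_cpu $dm -halt -monitor now; done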
Use the command /nas/sbin/getreason to make sure the status of the Data Mover is “powered off”.
Sample output:
[nasadmin@ns20 ~]$ /nas/sbin/getreason
10 - slot_0 primary control station
 - slot_2 powered off
5 - slot_3 contacted
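Rather than scanning the whole output, you can filter getreason for the slot in question (assuming the halted Data Mover lives in slot_2, as in this example):
$ /nas/sbin/getreason | grep slot_2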
Connect the powered-off Data Mover to the TLU, then power on the TLU and verify that it is online.
Restart the Data Mover using the command:
$ server_cpu <datamovername> -reboot -monitor now
Sample output:
[nasadmin@ns20 ~]$ server_cpu server_2 -reboot -monitor now
server_2 : reboot in progress 0.0.0.0.0.0.0.1.1.1.1.3.3.4.done
Use the command /nas/sbin/getreason to make sure the status of the Data Mover is back to “contacted”.
Sample output:
[nasadmin@ns20 ~]$ /nas/sbin/getreason
10 - slot_0 primary control station
5 - slot_2 contacted
5 - slot_3 contacted
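A reboot can take several minutes, so instead of re-running getreason by hand you can poll until the slot reports “contacted” (a rough sketch, again assuming slot_2):
$ while ! /nas/sbin/getreason | grep -q 'slot_2 contacted'; do sleep 10; done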
Part 2: Configuring NDMP on the Celerra with the attached VTL.
1- Now that the TLU (the VTL on the Data Domain) is connected to the Celerra Data Mover, you can verify that the Data Mover recognizes the newly connected TLU using the command:
$ server_devconfig <datamovername> -probe -scsi -nondisks
Sample output:
[nasadmin@ns20 ~]$ server_devconfig server_2 -probe -scsi -nondisks
server_2 :
SCSI non-disk devices :
chain= 0, scsi-0 : no devices on chain
chain= 1, scsi-1 : no devices on chain
chain= 2, scsi-2 : no devices on chain
chain= 3, scsi-3 : no devices on chain
chain= 4, scsi-4 : no devices on chain
chain= 5, scsi-5 : no devices on chain
chain= 6, scsi-6 : no devices on chain
chain= 7, scsi-7 : no devices on chain
chain= 8, scsi-8 : no devices on chain
chain= 9, scsi-9 : no devices on chain
chain= 10, scsi-10 : no devices on chain
chain= 11, scsi-11 : no devices on chain
chain= 12, scsi-12 : no devices on chain
chain= 13, scsi-13 : no devices on chain
chain= 14, scsi-14 : no devices on chain
chain= 15, scsi-15 : no devices on chain
chain= 16, scsi-16 : no devices on chain
chain= 17, scsi-17 : no devices on chain
chain= 18, scsi-18 : no devices on chain
chain= 19, scsi-19 : no devices on chain
chain= 20, scsi-20 : no devices on chain
chain= 21, scsi-21 : no devices on chain
chain= 22, scsi-22 : no devices on chain
chain= 23, scsi-23 : no devices on chain
chain= 24, scsi-24 : no devices on chain
chain= 25, scsi-25 : no devices on chain
chain= 26, scsi-26 : no devices on chain
chain= 27, scsi-27 : no devices on chain
chain= 28, scsi-28 : no devices on chain
chain= 29, scsi-29 : no devices on chain
chain= 30, scsi-30 : no devices on chain
chain= 31, scsi-31 : no devices on chain
chain= 32, scsi-32
stor_id= celerra_id=
tid/lun= 6/4 type= jbox info= STK L180 0306
tid/lun= 6/5 type= tape info= IBM ULTRIUM-TD3 8711
tid/lun= 6/6 type= tape info= IBM ULTRIUM-TD3 8711
chain= 33, scsi-33 : no devices on chain
chain= 34, scsi-34 : no devices on chain
chain= 35, scsi-35 : no devices on chain
chain= 36, scsi-36 : no devices on chain
chain= 37, scsi-37 : no devices on chain
chain= 38, scsi-38 : no devices on chain
chain= 39, scsi-39 : no devices on chain
chain= 40, scsi-40 : no devices on chain
chain= 41, scsi-41 : no devices on chain
chain= 42, scsi-42 : no devices on chain
chain= 43, scsi-43 : no devices on chain
chain= 44, scsi-44 : no devices on chain
chain= 45, scsi-45 : no devices on chain
chain= 46, scsi-46 : no devices on chain
chain= 47, scsi-47 : no devices on chain
chain= 48, scsi-48 : no devices on chain
chain= 49, scsi-49 : no devices on chain
chain= 50, scsi-50 : no devices on chain
chain= 51, scsi-51 : no devices on chain
chain= 52, scsi-52 : no devices on chain
chain= 53, scsi-53 : no devices on chain
chain= 54, scsi-54 : no devices on chain
chain= 55, scsi-55 : no devices on chain
chain= 56, scsi-56 : no devices on chain
chain= 57, scsi-57 : no devices on chain
chain= 58, scsi-58 : no devices on chain
chain= 59, scsi-59 : no devices on chain
chain= 60, scsi-60 : no devices on chain
chain= 61, scsi-61 : no devices on chain
chain= 62, scsi-62 : no devices on chain
chain= 63, scsi-63 : no devices on chain
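As you can see, the probe walks all 64 SCSI chains even though only chain 32 has devices: the STK L180 robot (the jukebox) and two IBM LTO-3 tape drives. To cut the output down to just the interesting lines, you can pipe it through grep on the Control Station:
$ server_devconfig server_2 -probe -scsi -nondisks | grep -v 'no devices on chain'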
2- Next, save the newly added devices to the Celerra server database using the command:
$ server_devconfig <datamovername> -create -scsi -nondisks
Sample output:
[nasadmin@ns20 ~]$ server_devconfig server_2 -create -scsi -nondisks
Discovering storage (may take several minutes)
server_2 : done
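Note that the device configuration is stored per Data Mover, so if more than one Data Mover is attached to the TLU, repeat the -create command for each of them.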
3- Now list the devices and their addresses using the command:
$ server_devconfig <datamovername> -list -scsi -nondisks
Sample output:
[nasadmin@ns20 ~]$ server_devconfig server_2 -list -scsi -nondisks
server_2 :
Scsi Device Table
name addr type info
tape3 c32t6l6 tape IBM ULTRIUM-TD3 8711
jbox1 c32t6l4 jbox STK L180 0306
tape2 c32t6l5 tape IBM ULTRIUM-TD3 8711
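The addr column encodes the device location as cXtYlZ: chain (SCSI bus) X, target ID Y, LUN Z. So jbox1 at c32t6l4 is the library's robotic arm that the probe in step 1 reported on chain 32 at tid/lun 6/4, and tape2/tape3 are the two drives at LUNs 5 and 6.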
4- The next step is to create a user account on the Data Mover. The backup software will use this account to access the NDMP tape devices.
For this step, switch to the root user by typing:
$ su -
5- After logging on as root, create a new user using the following command:
# /nas/sbin/server_user <datamovername> -add -md5 -password <newusername>
Sample output:
[root@ns20 ]# /nas/sbin/server_user server_2 -add -md5 -password ndmpuser
Creating new user ndmpuser
User ID: 1500
Group ID: 1000
Home directory:
Changing password for user ndmpuser
New passwd:
Retype new passwd:
server_2: done
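Keep this username and password handy; you will need them, along with the Data Mover's network name or IP, when you configure the NDMP client in the backup software.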
6- To verify the creation of the user, use the command:
# /nas/sbin/server_user <datamovername> -list
Sample output:
[root@ns20 ]# /nas/sbin/server_user server_2 -list
server_2:
vmware:GHzAATWb./tY.:555:555:Pcay8GerawFy7mUJvrVQZQt1Bk::ndmp_md5
vmware1:cdV184WodGNwU:666:666:zzRKDH1eraWowwm2kvjYtpU0VP::ndmp_md5
vmware2:EFTyKj2PNP9SE:777:777:96MtAH2ieUr18awmjvBblbtJaD::ndmp_md5
vmware5:BCpQj5sQBgqw2:6667:6667:CWdlvH5n7berZawmavfIjp5tYu::ndmp_md5
vmware6:JKaaz4rkcH06k:569:569:K46B1H6exrNxAawmvG1puKQQ58::ndmp_md5
vmware7:uuvlSUoHvdtXY:787:787:zomDIH7ipeErafwmzvqez9Du3H::ndmp_md5
vmware8:DEMoRK7mbcva.:789:789:CrJ1wH83Qer3awFBmv9ikgW39v::ndmp_md5
pmaes01:TUCu.VT.VhQAg:966:966:PpJtbGcvQmecEmhe5vXHYIpTRy::ndmp_md5
CK2000804008640000_CK2000804008640000:56vISRckWDg4U:65534:65534:Z6sl6Jresularocol6N8AOD5Zj::ndmp_md5
ndmpuser:34nqxPkxLiSBs:1500:1000:6anpcIreJasuPMD1MNQfmwLCuA::ndmp_md5
With that, the NDMP configuration is done!
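As a final sanity check before moving on to the backup software, you can confirm that the Data Mover's NDMP service is reachable from the backup server; NDMP listens on TCP port 10000 by default (the IP below is just a placeholder for your Data Mover's backup interface):
$ telnet 192.168.1.50 10000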
End of Part 2... to be continued in Part 3.