Sun Solaris 10 Operating System
Page 146
System Messaging

The /etc/syslog.conf file is responsible for sending or redirecting messages to a log file, the console, a user, or the loghost.

Note:
1. By default, every system is its own loghost.
2. Before doing any configuration, make sure the packages related to the services are completely installed along with their dependencies.
3. As a precaution, keep a backup of the default configuration file.

Message sources:
1. Daemon
2. User process
3. Kernel
4. logger (the only command used to generate messages by hand, useful for checking the configuration performed in the file /etc/syslog.conf)

These four sources can generate messages to files, the loghost, a user, or the console.

Levels of messages:
emerg - 0 (first priority) = Panic conditions that would normally be broadcast to all users.
alert - 1 = Conditions that should be corrected immediately, such as a corrupted system database.
crit - 2 = Warnings about critical conditions, such as hard device errors.
err - 3 = Other errors.
warning - 4 = Warning messages.
notice - 5 = Conditions that are not error conditions but might require special handling, such as a failed login attempt. A failed login attempt is considered a notice, not an error.
info - 6 = Informational messages.
debug - 7 = Messages that are normally used only when debugging a program.
none - 8 = Does not send messages from the indicated facility to the selected file.

NOTE: When we specify a syslog level, it means the specified level and all higher (more severe) levels. For example, if we specify the err level, then it includes crit, alert and emerg too.
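The cumulative-level rule can be illustrated with a small shell sketch (illustrative helper code only, not a Solaris tool; the function name is made up for this example):

```shell
#!/bin/sh
# Given a syslog level name, print that level and every higher-severity
# level, mirroring how a selector like "*.err" also matches crit, alert
# and emerg. Levels are numbered 0 (emerg) .. 7 (debug); "higher" means
# a numerically lower value.
levels_at_or_above() {
    awk -v want="$1" '
        BEGIN {
            split("emerg alert crit err warning notice info debug", name, " ")
            for (i = 1; i <= 8; i++)
                if (name[i] == want) cut = i
            for (i = 1; i <= cut; i++)
                printf "%s%s", name[i], (i < cut ? " " : "\n")
        }'
}
levels_at_or_above err   # prints: emerg alert crit err
```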
Sun Solaris 10 OS/Storage-SVM,VxVM /Cluster
Manickam Kamalakkannan
To compile: # /usr/ccs/bin/m4 /etc/syslog.conf

Solaris 10:
# svcadm enable system/system-log     Starts the syslogd daemon
# svcadm disable system/system-log    Stops the syslogd daemon
# svcadm refresh system/system-log    Makes the operating system re-read the configuration file /etc/syslog.conf. This command is a must whenever changes are made to the configuration file.
Solaris 9:
# /etc/init.d/syslog start
# /etc/init.d/syslog stop

Output (default contents of the file):
# Copyright (c) 1991-1998 by Sun Microsystems, Inc.
# All rights reserved.
#
# syslog configuration file.
#
# This file is processed by m4 so be careful to quote (`') names
# that match m4 reserved words.  Also, within ifdef's, arguments
# containing commas must be quoted.
#
*.err;kern.notice;auth.notice             /dev/sysmsg
*.err;kern.debug;daemon.notice;mail.crit  /var/adm/messages
*.alert;kern.err;daemon.err               operator
*.alert                                   root
*.emerg                                   *
# if a non-loghost machine chooses to have authentication messages
# sent to the loghost machine, un-comment the following line:
#auth.notice        ifdef(`LOGHOST', /var/log/authlog, @fire1)

mail.debug          ifdef(`LOGHOST', /var/log/syslog, @fire1)
#
# non-loghost machines will use the following lines to cause "user"
# log messages to be logged locally.
#
ifdef(`LOGHOST', ,
user.err            /dev/sysmsg
user.err            /var/adm/messages
user.alert          `root, operator'
user.emerg          *
)
Example entry from the file /etc/syslog.conf:

*.err;kern.debug;daemon.notice;mail.crit    /var/adm/messages
  A       B            C            D              E
where
A = *.err means all sources (user process, kernel, daemon, logger) generating error messages
B = kern.debug means only the kernel generating debug messages
C = daemon.notice means only daemons generating notice messages
D = mail.crit means only mail generating critical messages
E = /var/adm/messages: all of the above messages have to be logged to the file /var/adm/messages

# tail -f /var/adm/messages will display all the messages as they are generated.

Output (truncated):
bash-3.00# tail -f /var/adm/messages
Sep 3 14:12:40 sunfire2 genunix: [ID 935449 kern.info] ATA DMA off: disabled. Control with "atapi-cd-dma-enabled" property
Sep 3 14:12:40 sunfire2 genunix: [ID 882269 kern.info] PIO mode 4 selected
Sep 3 14:12:40 sunfire2 genunix: [ID 935449 kern.info] ATA DMA off: disabled. Control with "atapi-cd-dma-enabled" property
Sep 3 14:12:40 sunfire2 genunix: [ID 882269 kern.info] PIO mode 4 selected
Sep 3 14:12:40 sunfire2 genunix: [ID 773945 kern.info] UltraDMA mode 6 selected
Note: The -f option along with the tail command keeps the file open and continuously displays new contents to the user.
To test:
1. Edit the file /etc/syslog.conf:
   *.notice        /var/log/logs-test
2. Save the file.
3. Create an empty file under /var/log:
   # touch /var/log/logs-test
4. Refresh the system-log service:
   # svcadm refresh system-log
5. Test the configuration, e.g.:
   # logger -p local0.notice Notice:level "test message"
   # logger -p local0.notice Crit:level "test message"

Note: If the same message is generated several times, it will not be logged repeatedly to the specified file.

Now customizing the file /etc/syslog.conf

Option-1: Edit the above file with the following line:
*.err;kern.debug;daemon.notice;mail.crit
/var/adm/test_log
With this entry, the log will be sent to the file /var/adm/test_log.
Note:
1. Make sure that the file /var/adm/test_log exists.
2. Compile the file.
3. Refresh the service.
Option-2: Edit the above file with the following line:
*.err;kern.debug;daemon.notice;mail.crit
*
With this entry, the messages will be sent to all users who are currently logged in.
Option-3: Edit the above file with the following line:
*.err;kern.debug;daemon.notice;mail.crit
che
With this entry, the messages will be sent only to the user che (as specified).
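Taken together, the three options differ only in the action field. A hedged summary (the selector is the one used above; note that classic syslogd requires the selector and action to be separated by TAB characters, not spaces):

```
# option 1: append to a log file
*.err;kern.debug;daemon.notice;mail.crit	/var/adm/test_log
# option 2: write to every logged-in user
*.err;kern.debug;daemon.notice;mail.crit	*
# option 3: write only to the user "che"
*.err;kern.debug;daemon.notice;mail.crit	che
```

After any edit, refresh the service (# svcadm refresh system-log) so syslogd re-reads the file.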
Output: Example entry from the file /var/adm/messages

Sep 11 04:08:52 sunfire3 in.routed[185]: [ID 300549 daemon.warning] interface nge0 to
A                B        C              D                           E
192.168.0.200 restored
G

Here,
A = Date & time when the message was generated
B = System name (here, the local system name)
C = Process name/PID number
D = Message ID, facility.level information
E = Incoming request (here, through the interface nge0)
F = PPID number (NOTE: not seen in the above output line)
G = IP address
H = Port number (NOTE: not seen in the above output line)

# /usr/sbin/syslogd -d
- Used to debug the configuration file
- This command reads the configuration file
- NOTE: Only the root user can run this command in multi-user mode

Output (truncated):
bash-3.00# /usr/sbin/syslogd -d
main(1): Started at time Fri Sep 11 04:19:46 2009
hnc_init(1): hostname cache configured 2037 entry ttl:1200
getnets(1): found 1 addresses, they are: 0.0.0.0.2.2
amiloghost(1): testing 192.168.0.100.2.2
cfline(1): (*.err;kern.notice;auth.notice /dev/sysmsg)
cfline(1): (*.err;kern.debug;daemon.notice;mail.crit /var/adm/messages)
cfline(1): (*.err;kern.debug;daemon.notice;mail.crit /Desktop/log_file)
cfline(1): (*.err;kern.debug;daemon.notice;mail.crit /Desktop/log_test)
cfline(1): (*.err;kern.debug;daemon.notice;mail.crit india )
cfline(1): (*.err;kern.debug;daemon.notice;mail.crit /dev/console )
logerror(1): syslogd: /dev/console : No such file or directory
logerror_to_console(1): syslogd: /dev/console : No such file or directory
cfline(1): (*.alert;kern.err;daemon.err operator)
cfline(1): (*.alert root)
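The field layout above can be pulled apart with a short awk sketch (illustrative only; it assumes the whitespace-separated layout of the example line shown above):

```shell
#!/bin/sh
# Split one /var/adm/messages line into the labeled parts (A-D).
line='Sep 11 04:08:52 sunfire3 in.routed[185]: [ID 300549 daemon.warning] interface nge0 to 192.168.0.200 restored'
echo "$line" | awk '{
    printf "A date/time:   %s %s %s\n", $1, $2, $3
    printf "B system name: %s\n", $4
    printf "C process/PID: %s\n", $5
    fl = $8; sub(/\]$/, "", fl)     # strip the closing bracket
    printf "D message id:  %s  facility.level: %s\n", $7, fl
}'
```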
RBAC - Role Based Access Control

RBAC is an alternative method of assigning special privileges to a non-root user, as an authorization, a role, or a profile.
Note: In Linux, the equivalent implementation is sudo.

Configuration files:
/etc/user_attr:
- Extended user attributes database
- Associates users and roles with authorizations and profiles
NOTE: When creating a new user account with no rights profiles, authorizations or roles, nothing is added to this file.

/etc/security/auth_attr:
- Authorization attributes database
- Defines authorizations and their attributes and identifies the associated help file

/etc/security/prof_attr:
- Rights profile attributes database
- Defines profiles, lists each profile's assigned authorizations, and identifies the associated help file

/etc/security/exec_attr:
- Profile execution attributes database
- Defines the privileged operations assigned to a profile

Roles:
- Have an entry in the files /etc/passwd and /etc/shadow
- Similar to a user account
- A collection of profiles

Profiles:
- Have a dedicated shell; profile shells are assigned
- The Bourne shell and Korn shell have profile variants: pfsh (Bourne profile shell) and pfksh (Korn profile shell)
- A collection of a number of commands

NOTE:
1. If the user/role changes from the specified profile shell, they are not permitted to execute the authorized commands.
2. It is not possible to log in to the system directly using a role. A role can only be used by switching the user to the role with the "su" command.
3. We can also set up the "root" user as a role through a manual process. This approach prevents users from logging in directly as the root user. Therefore, they must log in as themselves first, and then use the su command to assume the role.
We can apply RBAC to a user in four ways:
1. Directly adding the authorization to the user account
2. Creating a profile and adding the profile to the user account
3. Creating a profile, adding it to a role, then adding the role to the user account
4. Adding an authorization to a role and adding the role to a user

I. Adding an authorization to a user account:
# useradd -m -d /export/home/shyam -s /usr/bin/pfsh \
  -A solaris.admin.usermgr.pswd \
  solaris.system.shutdown \
  solaris.system.admin.fsmgr.write shyam
# passwd shyam

Here, we have added existing authorizations to the user account using the -A option of the useradd command.
Note: The shell assigned is a profile shell.

Output:
bash-3.00# su - shyam
sunfire1% echo $SHELL
/usr/bin/pfsh
sunfire1% auths
solaris.admin.usermgr.pswd,solaris.system.shutdown,solaris.admin.fsmgr.write,solaris.device.cdrw,solaris.profmgr.read,solaris.jobs.users,solaris.mail.mailq,solaris.admin.usermgr.read,solaris.admin.logsvc.read,solaris.admin.fsmgr.read,solaris.admin.serialmgr.read,solaris.admin.diskmgr.read,solaris.admin.procmgr.user,solaris.compsys.read,solaris.admin.printer.read,solaris.admin.prodreg.read,solaris.admin.dcmgr.read,solaris.snmp.read,solaris.project.read,solaris.admin.patchmgr.read,solaris.network.hosts.read,solaris.admin.volmgr.read
sunfire1% profiles
Basic Solaris User
All
sunfire1% profiles -l
All:
    *
sunfire1% roles
No roles
# roles     - Returns information about which roles the user is authorized to assume
# profiles  - Returns information about which profiles the user is authorized to execute
# profiles -l - Returns detailed information about the permitted commands that can be executed by the user
# auths       - Returns information about the permitted authorizations mapped to the user account

When a user is created with additional information such as authorizations, profiles or roles, the useradd command updates the entry in the file /etc/user_attr.

Output (relevant to the topic):
prabhu::::type=normal;auths=solaris.admin.usermgr.pswd,solaris.system.shutdown,solaris.admin.fsmgr.write
Note: For a normal user (one created without any of these attributes), there is no entry in this file.
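The colon-separated layout of /etc/user_attr can be checked with a small sketch (illustrative only; the entry string is the one shown in the output above):

```shell
#!/bin/sh
# Extract the auths= list from a /etc/user_attr style entry.
# Field 5 holds semicolon-separated key=value attribute pairs.
entry='prabhu::::type=normal;auths=solaris.admin.usermgr.pswd,solaris.system.shutdown,solaris.admin.fsmgr.write'
echo "$entry" | awk -F: '{
    n = split($5, attr, ";")
    for (i = 1; i <= n; i++)
        if (attr[i] ~ /^auths=/) {
            sub(/^auths=/, "", attr[i])
            print attr[i]    # the comma-separated authorization list
        }
}'
```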
II. Creating a profile and adding it to a user account:
WTD (what to do):
1. Determine the name of the profile
2. Determine what commands have to be added to the profile
3. Edit the file /etc/security/prof_attr accordingly
4. Edit the file /etc/security/exec_attr, providing the list of commands for the profile
5. Map the profile to the user

HTD (how to do):
Example-1:
Profile name = testprofile
Commands added to the profile = shutdown, format, useradd, passwd

Step-1: Adding/creating a profile
# vi /etc/security/prof_attr
testprofile:::This is a test profile to test RBAC
     1                       2
Here,
1 = Name of the profile
2 = Comment about the profile (optional)
Step-2: Mapping the list of commands to the created profile
# vi /etc/security/exec_attr
testprofile:suser:cmd:::/usr/sbin/shutdown:uid=0
testprofile:suser:cmd:::/usr/sbin/format:uid=0
testprofile:suser:cmd:::/usr/sbin/useradd:uid=0
testprofile:suser:cmd:::/usr/bin/passwd:uid=0
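Each exec_attr line is colon-separated. Annotating the first entry above (the field names follow the exec_attr layout; the reserved fields are left empty):

```
# exec_attr field layout (colon separated, empty fields allowed):
#   name : policy : type : res1 : res2 : id : attr
# so in the first entry above:
#   name = testprofile, policy = suser, type = cmd,
#   id   = /usr/sbin/shutdown, attr = uid=0 (run with root's uid)
```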
Step-3: Mapping the profile to the user account
# useradd -m -d /export/home/accel -s /usr/bin/pfksh -P testprofile accel
Here we have added the profile named "testprofile" to the user.

Output:
bash-3.00# su - accel
sunfire1% echo $SHELL
/usr/bin/pfksh
sunfire1% roles
No roles
sunfire1% profiles
testprofile
Basic Solaris User
All
sunfire1% profiles -l
testprofile:
    /usr/sbin/shutdown    uid=0
    /usr/sbin/format      uid=0
    /usr/sbin/useradd     uid=0
    /usr/bin/passwd       uid=0
All:
    *
Example-2:
Profile name: complete
List of commands added: all (creating a profile with all root privileges)

Step-1: Adding/creating a profile
# vi /etc/security/prof_attr
complete:::This is to test the duplication of the root profile
    1                        2
Here,
1 = Name of the profile
2 = Comment about the profile (optional)
Step-2: Mapping the list of commands to the created profile
# vi /etc/security/exec_attr
complete:suser:cmd:::*:uid=0

Step-3: Mapping the user to the profile
# useradd -m -d /export/home/aita -s /usr/bin/pfsh -P complete aita
Output: bash-3.00# su - aita sunfire1# echo $USER root sunfire1# roles No roles sunfire1# profiles Web Console Management All Basic Solaris User sunfire1# profiles -l | more Web Console Management: /usr/share/webconsole/private/bin/smcwebstart
uid=noaccess, gid=noaccess, privs=proc_audit
All: *
Note:
1. The output of the commands # profiles and # profiles -l will be similar for the root user.
2. From the above output, we can also observe the change in the user's shell prompt. Normally the prompt is $, but since all privileges are given to this user, the prompt is #.

III. Creating a role and a profile, and mapping them to the user account.
WTD:
1. Determine the name of the user
2. Create the role
3. Assign a password to the role
   Note:
   a. A role must have a password.
   b. Without a password it is not possible to switch to that role.
4. Create a profile
5. Add the list of commands to the profile
6. Add the profile to the role
7. Add the role to the user
Note: This method adds an extra layer of security by assigning a password to the role.
HTD:
Step-1: Create a role
# roleadd -m -d /export/home/policy -s /usr/bin/pfsh policy
This command updates the following files:
a. /etc/passwd
b. /etc/shadow
c. /etc/user_attr

Output:
bash-3.00# roleadd -m -d /export/home/policy -s /usr/bin/pfsh policy
80 blocks
bash-3.00# passwd policy
New Password:
Re-enter new Password:
passwd: password successfully changed for policy
bash-3.00# grep policy /etc/passwd
policy:x:112:1::/export/home/policy:/usr/bin/pfsh
bash-3.00# grep policy /etc/shadow
policy:xXuxPLl/Wt13Q:14512::::::
bash-3.00# grep policy /etc/user_attr
policy::::type=role;profiles=All
Step-2: Creating a profile
Note: To create a profile, refer to section II (Creating a profile). Let's make use of the existing profile "testprofile".

Step-3: Adding the profile to the role
# rolemod -P testprofile,All policy
Adds the profile named "testprofile" to the existing role "policy". Now we can observe the changes in the file /etc/user_attr.
Output:
quality::::type=normal;roles=complete;auths=solaris.admin.usermgr.pswd,solaris.system.shutdown,solaris.admin.fsmgr.write

Step-4: Mapping the role to the user:
# useradd -m -d /export/home/nokia -R policy -s /bin/bash nokia
Adding a role to the user.
Output:
bash-3.00# useradd -m -d /export/home/nokia -R policy -s /bin/bash nokia
80 blocks
bash-3.00# passwd nokia
New Password:
Re-enter new Password:
passwd: password successfully changed for nokia
bash-3.00# su - nokia
sunfire1% auths
solaris.device.cdrw,solaris.profmgr.read,solaris.jobs.users,solaris.mail.mailq,solaris.admin.usermgr.read,solaris.admin.logsvc.read,solaris.admin.fsmgr.read,solaris.admin.serialmgr.read,solaris.admin.diskmgr.read,solaris.admin.procmgr.user,solaris.compsys.read,solaris.admin.printer.read,solaris.admin.prodreg.read,solaris.admin.dcmgr.read,solaris.snmp.read,solaris.project.read,solaris.admin.patchmgr.read,solaris.network.hosts.read,solaris.admin.volmgr.read
sunfire1% profiles
Basic Solaris User
All
sunfire1% profiles -l
All:
    *
sunfire1% roles
policy
sunfire1% su policy
Password:
sunfire1% profiles
testprofile
All
Basic Solaris User
sunfire1% profiles -l
testprofile:
    /usr/sbin/shutdown    uid=0
    /usr/sbin/format      uid=0
    /usr/sbin/useradd     uid=0
    /usr/bin/passwd       uid=0
All:
    *
Note: Authorized activity can be performed by the user only after switching to the role. A role account CANNOT be logged into directly.
Output:
bash-3.00# su - nokia
sunfire1% su policy
Password:
$ /usr/sbin/shutdown -g 180 -i 5
Shutdown started.
Fri Sep 25 17:26:01 IST 2009
Broadcast Message from root (pts/3) on sunfire1 Fri Sep 25 17:26:01... The system sunfire1 will be shut down in 3 minutes
Note: The default authorizations assigned to a user are defined in the file /etc/security/policy.conf.
bash-3.00# grep -i auths /etc/security/policy.conf
AUTHS_GRANTED=solaris.device.cdrw
NAMING SERVICES

NIS -> Network Information Service
- Used for centralized user administration
- Works within the same environment (LAN)

NIS has 3 components:
a. NIS Master
b. NIS Slave
c. NIS Client

NIS Master Server:
1. The first system to be prepared in the domain
2. Has the source files
3. Has NIS maps, which are built from the source files
4. Provides a single point of control
5. Only one master server per domain
6. Daemons (run on the NIS master server):
   a. ypserv
   b. ypbind
   c. ypxfrd
   d. rpc.yppasswdd
   e. rpc.ypupdated

NIS Slave Server:
1. An optional system in the domain
2. Doesn't have source files for that domain
3. But has maps, which are received from the master server
4. Provides load balancing when the master server is busy
5. Provides redundancy when the master server fails
6. Daemons (run on the NIS slave server):
   a. ypserv
   b. ypbind
NIS Client:
1. Doesn't have source files or maps
2. Binds to the slave server dynamically when the master server is either busy or down
3. Daemons (run on the NIS client):
   a. ypbind

DNS  -> Domain Name System (WAN)
LDAP -> Lightweight Directory Access Protocol; works with other environments too.

With reference to the diagram:
(1) -> Whenever a user tries to log in to the system by issuing the login name and password, the system first checks the entries in the file /etc/nsswitch.conf.
nsswitch.conf -> Name Service Switch configuration file. It tells the system in what order the sources have to be searched for the login: for example, first the NIS server, then the local files, and so on.
We are provided with a number of templates for the naming services:
nsswitch.nis
nsswitch.nisplus
nsswitch.dns
nsswitch.ldap

(2) -> After reading the entry in that file, the system moves on and reads the file /etc/hosts.
(3) -> It then reaches the NIS server.
(4) -> It reads the database of the NIS server to permit the user login. /etc/passwd, /etc/shadow and some other files are checked, and if the issued login name exists in the database of the NIS server, the system responds positively.
(5) -> The response is redirected to the client to authenticate the login.

NIS Server Configuration Steps:
1. # cp /etc/nsswitch.nis /etc/nsswitch.conf
2. # domainname aita.com
3. # domainname > /etc/defaultdomain
4. # cd /etc
5. # touch ethers bootparams locale timezone netgroup netmasks
6. # ypinit -m
7. # /usr/lib/netsvc/yp/ypstart (and /usr/lib/netsvc/yp/ypstop to stop)
8. # ypcat hosts
9. # ypcat passwd
10. # ypwhich

TO CONFIGURE AN NIS CLIENT:
1. # cp /etc/nsswitch.nis /etc/nsswitch.conf
2. # domainname accel.com
3. # domainname > /etc/defaultdomain
4. # ypinit -c
5. # /usr/lib/netsvc/yp/ypstart
6. # ypwhich -m
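For reference, the relevant lines of the copied template look roughly like this (a hedged excerpt in the style of nsswitch.nis; the exact template shipped with a given release may differ):

```
passwd:     files nis
group:      files nis
hosts:      nis [NOTFOUND=return] files
networks:   nis [NOTFOUND=return] files
```

The bracketed entry overrides the default action for one status code, which is explained in the status-code section below.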
TO UPDATE THE MAPS AFTER A USER ADDITION:
1. # cd /var/yp
2. # /usr/ccs/bin/make

# ypwhich                    displays the name of the NIS master server
# ypmatch -k shivan passwd   displays the entry for user "shivan" in the passwd database
# ypcat hosts                displays the hosts database
# ypinit -m                  initiates the NIS master server
# ypinit -c                  initiates the system as a client; when prompted for the list of servers, provide the server names

NIS search status codes:
SUCCESS  - requested entry was found
UNAVAIL  - source was unavailable
NOTFOUND - source contains no such entry
TRYAGAIN - source returned an "I'm busy, try later" message

Actions:
continue - try the next source
return   - stop looking for the entry

Default actions:
SUCCESS  = return
UNAVAIL  = continue
NOTFOUND = continue
TRYAGAIN = continue

Note: With NOTFOUND=return, the next source in the list will only be searched if NIS is down or has been disabled. Normally, a success indicates that the search is over, and an unsuccessful result indicates that the next source should be queried. There are occasions, however, when you want to stop searching when an unsuccessful result is returned.

Information handled by a name service includes:
1. System (host) names and addresses
2. User names
3. Passwords
4. Groups
5. Automounter configuration files
(auto.master, auto.home)
6. Access permissions & RBAC database files

NOTE: YP to NIS
1. NIS was formerly known as Sun Yellow Pages (YP). The functionality remains the same; only the name has changed.
2. NIS administration databases are called MAPS.
3. NIS stores information about workstation names and addresses, users, the network itself, and network services. This collection of network information is referred to as the NIS NAMESPACE.
4. Any system can be an NIS client, but only systems with disks should be NIS servers, whether master or slave.
5. Servers are also clients of themselves.
6. The master copies of the maps are located on the NIS master server, in the directory /var/yp/domain_name.
7. Under that directory, each map is stored as 2 files:
   a. mapname.dir
   b. mapname.pag

/etc/bootparams:
1. Contains the path names that clients need during startup: root, swap and possibly others.

/etc/ethers:
1. Contains system names and Ethernet addresses. The system name is the key in the map ethers.byname.
2. Contains system names and Ethernet addresses. The Ethernet address is the key in the map ethers.byaddr.

/etc/netgroup:
1. netgroup - contains the group name, user name and system name. The group name is the key.
2. netgroup.byhost - contains the group name, user name and system name. The system name is the key.
3. netgroup.byuser - contains the group name, user name and system name. The user name is the key.

/etc/netmasks:
netmasks.byaddr - contains the network masks to be used with YP subnetting. The address is the key.

/etc/timezone:
timezone.byname - contains the default timezone database. The timezone name is the key.

/etc/shadow - ageing.byname

/etc/security/auth_attr - contains the authorization description database, part of RBAC.

/etc/auto_home - auto.home: automounter file for home directories.
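The search status codes and default actions listed earlier can be modeled with a toy shell function (purely illustrative; this is not Solaris code):

```shell
#!/bin/sh
# Toy model of the nsswitch default actions: SUCCESS stops the search
# (return), while UNAVAIL, NOTFOUND and TRYAGAIN move on (continue).
lookup() {
    for status in "$@"; do        # $@ = result from each source, in order
        case $status in
            SUCCESS) echo "stop: entry found"; return 0 ;;
            *)       echo "continue past $status" ;;
        esac
    done
    echo "stop: no source had the entry"
    return 1
}
lookup NOTFOUND SUCCESS   # continues past NOTFOUND, then stops with a hit
```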
/etc/auto_master - auto.master: master automounter map

/etc/security/exec_attr - contains execution profiles, part of RBAC

/etc/hosts - hosts.byaddr, hosts.byname

/etc/group - group.byname, group.bygid

/etc/user_attr - contains the extended user attributes database, part of RBAC

/etc/security/prof_attr - contains profile descriptions, part of RBAC

/etc/passwd, /etc/shadow - passwd.byname, passwd.byuid

map.key.pag and map.key.dir:
map - base name of the map (hosts, passwd and so on)
key - the map's sort key (byname, byaddr and so on)
pag - the map's data
dir - an index to the *.pag file

The above are some of the databases and files referred to after activating NIS. There are still more files and directories.

To construct an NIS slave server:
# ypinit -s master_server_name

To delete the NIS server configuration:
1. Replace the file:
   # cp /etc/nsswitch.files /etc/nsswitch.conf
2. Remove the binding directory:
   # cd /var/yp
   # rm -rf binding
Note: Make sure that the yp services are stopped. # /usr/lib/netsvc/yp/ypstop
"/etc/hosts" 8 lines, 171 characters
bash-3.00# ypinit -m

In order for NIS to operate successfully, we have to construct a list of the NIS servers. Please continue to add the names for YP servers in order of preference, one per line. When you are done with the list, type a <control D> or a return on a line by itself.
    next host to add: sunfire1
    next host to add: ^D
The current list of yp servers looks like this:
sunfire1
Is this correct?
[y/n: y]
y
Installing the YP database will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.
Do you want this procedure to quit on non-fatal errors? [y/n: n]
OK, please remember to go back and redo manually whatever fails.
If you don't, some part of the system (perhaps the yp itself) won't work.
The yp domain directory is /var/yp/solaris.com
There will be no further questions.
The remainder of the procedure should take 5 to 10 minutes.
Building /var/yp/solaris.com/ypservers...
Running /var/yp/Makefile...
updated passwd
updated group
updated hosts
updated ipnodes
updated ethers
updated networks
updated rpc
updated services
updated protocols
updated netgroup
updated bootparams
updated publickey
updated netid
/usr/sbin/makedbm /etc/netmasks /var/yp/`domainname`/netmasks.byaddr;
updated netmasks
updated timezone
updated auto.master
updated auto.home
updated ageing
updated auth_attr
updated exec_attr
updated prof_attr
updated user_attr
updated audit_user
sunfire1 has been set up as a yp master server with errors. Please remember to figure out what went wrong, and fix it.
If there are running slave yp servers, run yppush now for any data bases which have been changed. If there are no running slaves, run ypinit on those hosts which are to be slave servers.
Configuring the Slave Server:
bash-3.00# hostname
sunfire2
bash-3.00# ypinit -c

In order for NIS to operate successfully, we have to construct a list of the NIS servers. Please continue to add the names for YP servers in order of preference, one per line. When you are done with the list, type a <control D> or a return on a line by itself.
    next host to add: sunfire1
    next host to add: ^D
The current list of yp servers looks like this:
sunfire1
Is this correct?  [y/n: y]
bash-3.00# svcadm enable nis/client
bash-3.00# svcs -a | grep nis
disabled        9:49:38 svc:/network/rpc/nisplus:default
disabled        9:49:38 svc:/network/nis/server:default
disabled        9:49:38 svc:/network/nis/passwd:default
disabled        9:49:38 svc:/network/nis/xfr:default
disabled        9:49:39 svc:/network/nis/update:default
online         10:28:28 svc:/network/nis/client:default
bash-3.00# ypinit -s sunfire1
Installing the YP database will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.
Do you want this procedure to quit on non-fatal errors? [y/n: n]
OK, please remember to go back and redo manually whatever fails.
If you don't, some part of the system (perhaps the yp itself) won't work.
The yp domain directory is /var/yp/solaris.com
There will be no further questions.
The remainder of the procedure should take a few minutes, to copy the data bases from sunfire1.
sunfire2's nis data base has been set up
bash-3.00# svcs -a | grep nis
disabled        9:49:38 svc:/network/rpc/nisplus:default
disabled        9:49:38 svc:/network/nis/passwd:default
disabled        9:49:38 svc:/network/nis/xfr:default
disabled        9:49:39 svc:/network/nis/update:default
online         10:28:28 svc:/network/nis/client:default
Configuring a Client:
bash-3.00# hostname
sunfire3
bash-3.00# ypinit -c

In order for NIS to operate successfully, we have to construct a list of the NIS servers. Please continue to add the names for YP servers in order of preference, one per line. When you are done with the list, type a <control D> or a return on a line by itself.
    next host to add: sunfire1
    next host to add: sunfire2
    next host to add: ^D
The current list of yp servers looks like this:
sunfire1
sunfire2
Is this correct?  [y/n: y]
bash-3.00# svcadm enable nis/client
bash-3.00# svcs -a | grep nis
disabled        9:46:37 svc:/network/rpc/nisplus:default
disabled        9:46:37 svc:/network/nis/server:default
disabled        9:46:37 svc:/network/nis/passwd:default
disabled        9:46:37 svc:/network/nis/update:default
disabled        9:46:37 svc:/network/nis/xfr:default
online         10:33:11 svc:/network/nis/client:default
bash-3.00# ypwhich
sunfire1
Client-side service:
To create an automount facility for users' home directories on demand, through an indirect map, edit the file /etc/auto_master:

#+auto_master
. . . .
/export/home    home-indirect
:wq!

Now create the file /etc/home-indirect.
Note: This file will NOT be present by default; it has to be created (it can have any name, but make sure that name is entered in the auto_master file).

Contents of the file:
# vi /etc/home-indirect
*    sunfire1:/export/home/&
:wq!

Note: Here sunfire1 is the name of the NIS master server.
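Putting the two files together: in an indirect map, * matches the directory name being accessed and & substitutes that same name on the server side. A sketch (sunfire1 and the paths are the ones used above; the login name "user5" is a hypothetical example):

```
# /etc/auto_master entry:
/export/home    home-indirect
# /etc/home-indirect entry:
*    sunfire1:/export/home/&
# so accessing /export/home/user5 mounts sunfire1:/export/home/user5 on demand
```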
Jumpstart

RARP - Reverse Address Resolution Protocol
ARP - Address Resolution Protocol

Custom Jumpstart:
1. Requires up-front work.
2. The most efficient way to centralize and automate operating system installation in a large enterprise.
3. A way to install groups of similar systems automatically and identically.

Jumpstart:
1. Automatically installs the Solaris software on a SPARC-based system just by inserting the Solaris CD and powering on the system.
2. For new SPARC systems shipped from Sun Microsystems, this is the default method of installing the operating system.

Commands:
# ./setup_install_server
Sets up an install server to provide the OS to the client during the jumpstart installation. This command is also used to set up a boot-only server when the -b option is specified.
# ./add_to_install_server
A script that copies additional packages within a product tree on the Solaris 10 Software and Solaris 10 Languages CDs to the local disk on an existing install server.
# ./add_install_client
A command that adds network installation information about a system to an install or boot server's /etc files so that the system can install over the network.
# ./rm_install_client
Removes jumpstart clients that were previously set up for network installation.
# ./check
Validates the information in the rules file.

Components of a Jumpstart server:
1. Boot & client identification services: These services are provided by a networked boot server and provide the information that a jumpstart client needs to boot using the network.
2. Installation services: These are provided by a networked install server, which provides an image of the Solaris OS environment the jumpstart client uses as its source of data to install.
3. Configuration services: These are provided by a networked configuration server and provide information that a jumpstart client uses to partition disks and create file systems, add/remove Solaris packages and perform other configuration tasks.

The Boot Server:
1. Also called the start-up server; this is where the client system accesses the startup files.
2. When a client is first turned on, it has no OS installed and no IP address assigned; therefore, when the client is first started, the boot server provides this information.
3. The boot server, running the RARP daemon in.rarpd, looks up the Ethernet address in its /etc/ethers file, checks for the corresponding name in its /etc/hosts file, and passes the IP address back to the client.

Important files which the boot server will look up:
/etc/ethers
/etc/bootparams
/etc/dfs/dfstab
/etc/hosts
/tftpboot

/etc/ethers:
1. When the jumpstart client boots, it has no IP address, so it broadcasts its Ethernet address to the network using RARP.
2. The boot server receives this request and attempts to match the client's Ethernet address with an entry in the local /etc/ethers file.
3. If a match is found, the client name is matched to an entry in the /etc/hosts file. In response to the RARP request from the client, the boot server sends the IP address from the /etc/hosts file back to the client. The client continues the boot process using the assigned IP address.
4. An entry for the jumpstart client must be created by editing the /etc/ethers file or by using the add_install_client script.

/etc/bootparams:
1. Contains entries that network clients use for booting.
2. Jumpstart clients retrieve the information from this file by issuing requests to a server running the rpc.bootparamd program.

/tftpboot:
1. When booting over the network, the jumpstart client's boot PROM makes a RARP request, and when it receives a reply the PROM broadcasts a TFTP request to fetch the inetboot file from any server that responds, and executes it.

The Install server:
1. The boot server and the install server are typically the same system.
2. The install server is a networked system that provides Solaris 10 DVD/CD images from which we can install Solaris 10 on another system on the network.

The Configuration server:
1. The server that contains a jumpstart configuration directory is called a configuration server. It is usually the same system as the install and boot server, although it can be a completely different server.
2. The configuration directory on the configuration server should be owned by root and should have permissions set to 755 (by default).
3. The configuration directory holds the rules file, the rules.ok file, the class file, the check script, and the optional begin and finish scripts.

Begin and Finish scripts:
1. A begin script is a user-defined Bourne shell script, located in the jumpstart configuration directory on the configuration server and specified within the rules file, that performs tasks before the Solaris software is installed on the system.
2. Output from the begin script goes to /var/sadm/system/logs/begin.log
3. A begin script should be owned by root with the default permissions.
4. Output from a finish script goes to /var/sadm/system/logs/finish.log

Procedure to initiate the jumpstart configuration:

Installation Service:
1. Create a slice with at least 5 GB of space for holding the OS image. Here, in our example, we have created a slice (c0d1s5) with 6 GB.
2. Create the file system on the created slice.
# newfs /dev/rdsk/c0d1s5
3. Create a mount point and mount the slice.
# mkdir /jump_image
# mount /dev/dsk/c0d1s5 /jump_image
Note: The slice can also be mounted permanently by editing the file /etc/vfstab.
4. Now mount the CD-ROM/DVD (OS) either manually or using volume management.
# /etc/init.d/volmgt start
5. Move to the location:
# cd /cdrom/Solaris_10/Tools
6. Run the following script from that location:
# ./setup_install_server /jump_image
This command will do the following:
a. Check for the mount point /jump_image
b. Check for the available space
c. Copy the OS image from the CD/DVD to the hard disk drive

Identification Service:
WTD - What to do?
1. Create a directory /jump_image/config
Note: It can have any name.
2. Create a directory named after the jumpstart client under the above created directory: /jump_image/config/client1 [Optional]
3. Create a file 'sysidcfg'.
Note: The file name must be sysidcfg.
4. Share the directory.

HTD - How to do?
1. # mkdir /jump_image/config
2. # mkdir /jump_image/config/client1
3. # cd /jump_image/config/client1
4. # vi sysidcfg
Edit the file with the following contents:
network_interface=PRIMARY {
    hostname=client1
    netmask=255.255.255.0
    protocol_ipv6=no
    default_route=none
}
name_service=none
system_locale=en_US
timezone=Asia/Calcutta
timeserver=localhost
root_password=
:wq!
5. # cat >> /etc/dfs/dfstab
share -F nfs -o ro /jump_image/config/client1
Ctrl+D
6. # shareall
7. # svcadm enable nfs/server
8. # share
Only to check whether the resources are shared properly.

Configuration server:
Controls how the installation proceeds on jumpstart clients. Provides information about:
a. Installation type
b. System type
c. Disk partitions or file systems
d. Cluster selection
e. Software package addition/deletion

WTD:
1. Create a profile under the /jump_image/config/client1 directory, with any name.
Note: The profile file is also known as the CLASS file.
2. Create a rules file to choose the right profile for the client, in the same directory.
3. Run the check script to get the rules.ok file.
HTD:
1. # vi /jump_image/config/client1/profile
Edit the file with the following keywords:
install_type
system_type
partitioning
filesys
cluster
package
:wq!
Note:
partitioning explicit - means the manual layout.
package SUNWman delete - will not install the package SUNWman.
In the case of an x86 client: fdisk all solaris all

2. # vi /jump_image/config/client1/rules
Edit the file with the following contents:
#hostname
hostname client1 - profile -
:wq!

3. # cd /cdrom/Solaris_10/Misc/jumpstart_sample
4. # cp check /jump_image/config/client1
Copy the file check from the DVD to the above specified location.
5. # cd /jump_image/config/client1
6. # ./check
It will verify the rules file. If the syntax is correct, it creates the rules.ok file.
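Pulling the profile keywords above together, a minimal complete profile might look like the following. The filesys slice names and sizes and the cluster choice here are illustrative assumptions, not values from these notes; only partitioning explicit and package SUNWman delete come from the text above:

```
install_type    initial_install
system_type     standalone
partitioning    explicit
filesys         c0t0d0s0 6000 /
filesys         c0t0d0s1 1024 swap
cluster         SUNWCreq
package         SUNWman delete
```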
Boot server:
1. # vi /etc/ethers
Edit the file with the client's MAC address and its proposed hostname, e.g.:
8:0:20:a9:bc:36    client1
:wq!
2. # vi /etc/inet/hosts
Edit the file with the proposed IP address and the proposed hostname of the client.
On the client side, at the Sun machine's OBP prompt:
OK boot net - install
Zone Administration

Zone types:
1. Global zone
2. Non-global zone

Global zone:
1. Has two functions: it is both the default zone for the system and the zone used for system-wide administrative control.
2. Is the only zone from which a non-global zone can be configured, installed, managed, or uninstalled.
3. Only the global zone is bootable from the system hardware.
4. Administration of the system infrastructure, such as physical devices, routing, or dynamic reconfiguration, is ONLY possible in the global zone.
5. Contains a complete installation of the Solaris system software packages.
6. Provides a complete database containing information about all installed components. It also holds configuration information specific to the global zone only, such as the global zone hostname and the file system table.
7. Is the only zone that is aware of all devices and all file systems.
8. Always has the name global.

Note:
1. Each zone is also given a unique numeric identifier, which is assigned by the system when the zone is booted.
2. The global zone is always mapped to zone ID 0.
3. The system assigns non-zero IDs to non-global zones when they boot. The number can change when the zone reboots.

Non-global zones:
1. Can also contain Solaris software packages shared from the global zone and additional installed software packages not shared from the global zone.
2. A non-global zone is not aware of the existence of any other zones. It CANNOT install, manage or uninstall itself or any other zones.

Zone daemons:
Solaris uses two daemons to control zone operation:
a. zoneadmd
b. zsched
Note: The zoneadmd daemon is the primary process for managing the zone's virtual platform. There is one zoneadmd process running for each active (ready, running or shutting down) zone on the system. Unless the zoneadmd daemon is already running, it is automatically started by the zoneadm command.
Zoneadmd:
Responsible for:
1. Managing zone booting and shutting down
2. Allocating the zone ID and starting the zsched system process
3. Setting zone-wide resource controls (rctl)
4. Preparing the zone's devices as specified in the zone configuration
5. Plumbing virtual network interfaces
6. Mounting loopback and conventional file systems

Zsched:
Every active zone has an associated kernel process, zsched. The zsched process enables the zones subsystem to keep track of per-zone kernel threads. Kernel threads doing work on behalf of the zone are owned by zsched.

Zone file systems:
There are two models for installing root file systems in non-global zones:
a. Sparse zone
b. Whole root zone

Sparse zone:
1. Installs a minimal number of files from the global zone when a non-global zone is installed.
2. Only certain root packages are installed in the non-global zone. These include a subset of the required root packages that are normally installed in the global zone, as well as any additional root packages that the global administrator might have selected.
Note: Any files that need to be shared between a non-global zone and the global zone can be mounted as read-only loopback file systems. By default /lib, /usr, /platform and /sbin are mounted in this manner. Once a zone is installed, it is no longer dependent on the global zone unless a file system is mounted using a loopback file system. A non-global zone CANNOT be an NFS server.

Whole root zone:
1. All of the required and any selected optional Solaris packages are installed into the private file systems of the zone.
2. Provides the maximum flexibility.
3. Advantages of this model include the capability for global zone administrators to customize their zone's file system layout.

Zone states:
Undefined: The zone's configuration has not been completed and committed to stable storage. This state also occurs when a zone's configuration has been deleted.
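The per-zone daemon and its kernel process can be observed from the global zone; a quick sketch using the zone name zones1 that appears later in these notes (output varies with the zones actually running):

```
# pgrep -l -f zoneadmd    # one zoneadmd process per active zone
# ps -ef -z zones1        # list the processes running inside zone zones1
```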
Configured: The zone's configuration is complete and committed to stable storage. However, those elements of the zone's application environment that must be specified after initial boot are not yet present.
Incomplete: This is a transitional state. During an install or uninstall operation, zoneadm sets the state of the target zone to incomplete. Upon successful completion of the operation, the state is set to the correct state. However, a zone that is unable to complete the install process will stop in this state.
Installed: In this state, the zone's configuration is instantiated on the system. The zoneadm command is used to verify that the configuration can be successfully used on the designated Solaris system. Packages are installed under the zone's root path. In this state, the zone has no associated virtual platform.
Ready: In this state, the virtual platform for the zone is established. The kernel creates the zsched process, network interfaces are plumbed, file systems are mounted, and devices are configured. A unique zone ID is assigned by the system. At this stage, no processes associated with the zone have been started.
Running: In this state, the user processes associated with the zone application environment are running. The zone enters the running state as soon as the first user process associated with the application environment is created.
Shutting down and down: These are transitional states that are visible while the zone is being halted. However, a zone that is unable to shut down for any reason will stop in one of these states.

Allocating file system space:
1. About 100 MB of disk space per non-global zone is required when the global zone has been installed with all of the standard Solaris packages.
2. By default, any additional packages installed in the global zone also populate the non-global zones; the amount of disk space required must be increased accordingly. The directory location in the non-global zone for these additional packages is specified through the inherit-pkg-dir resource.
3. An additional 40 MB of RAM per zone is suggested, but not required on a machine with sufficient swap space.

Usage of the # zonecfg command:
1. Create or delete a zone configuration
2. Set properties for resources added to a configuration
3. Query or verify a configuration
4. Commit to a configuration
5. Revert to a previous configuration
6. Exit from a zonecfg session
Usage of the # zoneadm command:
1. Verify a zone's configuration
2. Install a zone
3. Boot a zone
4. Reboot a zone
5. Display information about a running zone
6. Move a zone
7. Uninstall a zone
8. Remove a zone using the zonecfg command
In a nutshell:
1. Create the zone using the zonecfg -z <zonename> command [undefined state].
2. Create the zone path directory manually; its permissions should be 700.
3. Configure the zone using the zonecfg command [configured state].
4. Install the zone after configuration to change the state to installed [during installation: incomplete].
5. Boot the zone after installing it [running state; before this it passes through the ready state, where the network interfaces are plumbed, file systems are mounted, devices are configured and a unique zone ID is assigned by the system; at the ready state no processes associated with the zone have started]. The state then goes to running, where the processes are started.
Output: Zone configuration steps:
bash-3.00# zonecfg -z zones1
zones1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zones1> create
zonecfg:zones1> set zonepath=/etc/zones/zonepractice
zonecfg:zones1> set autoboot=true
zonecfg:zones1> add fs
zonecfg:zones1:fs> set dir=/mnt/zones
zonecfg:zones1:fs> set special=c1t0d0s4
zonecfg:zones1:fs> set raw=/dev/rdsk/c1t0d0s4
zonecfg:zones1:fs> set type=ufs
zonecfg:zones1:fs> end
zonecfg:zones1> add net
zonecfg:zones1:net> set physical=eri0
zonecfg:zones1:net> set address=10.2.3.5
zonecfg:zones1:net> end
zonecfg:zones1> add attr
zonecfg:zones1:attr> set name=zones
zonecfg:zones1:attr> set type=string
zonecfg:zones1:attr> set value=uint
zonecfg:zones1:attr> end
zonecfg:zones1> add inherit-pkg-dir
zonecfg:zones1:inherit-pkg-dir> set dir=/opt/sfw
zonecfg:zones1:inherit-pkg-dir> end
zonecfg:zones1> add rctl
zonecfg:zones1:rctl> set name=zone.cpu-shares
zonecfg:zones1:rctl> add value (priv=privileged,limit=10,action=none)
zonecfg:zones1:rctl> end
zonecfg:zones1> verify
zonecfg:zones1> commit
zonecfg:zones1> exit
Output: To know the configured zone status:
# zoneadm list -cp
0:global:running:/::native:shared
-:zones1:configured:/etc/zones/zonepractice::native:shared
bash-3.00# zoneadm -z zones1 install
bash-3.00# zoneadm -z zones1 boot
bash-3.00# zoneadm list -cp
0:global:running:/::native:shared
1:zones1:running:/etc/zones/zonepractice:f84ec383-bfe3-c890-8a7f-f74970d40c96:native:shared
bash-3.00# zlogin -C zones1
[Connected to zone 'zones1' console]
To halt a zone:
# zoneadm -z zones1 halt
To uninstall a zone:
# zoneadm -z zones1 uninstall
To delete a zone:
# zonecfg -z zones1 delete
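Besides the console login shown above (zlogin -C), a running zone can also be entered or driven non-interactively; a quick sketch:

```
# zlogin zones1              # interactive shell inside the zone
# zlogin zones1 uname -a     # run a single command inside the zone
```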
ZFS

ZFS has been designed to be robust, scalable and simple to administer.
ZFS pool storage features:
ZFS eliminates volume management altogether. Instead of forcing us to create virtualized volumes, ZFS aggregates devices into a storage pool. The storage pool describes the physical characteristics of the storage (device layout, data redundancy, and so on) and acts as an arbitrary data store from which file systems can be created. File systems grow automatically within the space allocated to the storage pool.
ZFS is a transactional file system, which means that the file system state is always consistent on disk. With a transactional file system, data is managed using copy-on-write semantics.
ZFS supports storage pools with varying levels of data redundancy, including mirroring and a variation on RAID-5. When a bad data block is detected, ZFS fetches the correct data from another replicated copy and repairs the bad data, replacing it with the good copy.
The file system itself is 128-bit, allowing for 256 quadrillion zettabytes of storage. Directories can have up to 2^48 (256 trillion) entries, and no limit exists on the number of file systems or number of files that can be contained within a file system.
A snapshot is a read-only copy of a file system or volume. Snapshots can be created quickly and easily. Initially, snapshots consume no additional space within the pool.
Clone - A file system whose initial contents are identical to the contents of a snapshot.

ZFS component naming requirements:
Each ZFS component must be named according to the following rules:
1. Empty components are not allowed.
2. Each component can only contain alphanumeric characters in addition to the following 4 special characters:
a. Underscore (_)
b. Hyphen (-)
c. Colon (:)
d. Period (.)
3. Pool names must begin with a letter, except that the beginning sequence c(0-9) is not allowed (this is because of the physical device naming convention). In addition, pool names that begin with mirror, raidz, or spare are not allowed, as these names are reserved.
4. Dataset names must begin with an alphanumeric character.

ZFS hardware and software requirements and recommendations:
1. A SPARC or x86 system that is running the Solaris 10 6/06 release or a later release.
2. The minimum disk size is 128 MB. The minimum amount of disk space required for a storage pool is approximately 64 MB.
3. The minimum amount of memory recommended to install a Solaris system is 512 MB. However, for good ZFS performance, at least 1 GB or more of memory is recommended.
4. When creating a mirrored disk configuration, multiple controllers are recommended.
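The pool operations shown in the sections below assume the pools already exist; a sketch of creating them, using the slice names that appear elsewhere in these notes (the slices themselves are site-specific):

```
# zpool create testpool c2d0s7                     # simple single-device pool
# zpool create testmirrorpool mirror c2d0s3 c2d0s4 # two-way mirrored pool
# zfs create testpool/homedir                      # file system inside the pool
```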
DESTROYING A POOL:
bash-3.00# zpool destroy testmirrorpool
bash-3.00# zpool list
NAME      SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
testpool    2G  100M  1.90G   4%  ONLINE  -
MANAGING ZFS PROPERTIES:
bash-3.00# zfs get all testpool/homedir
NAME              PROPERTY       VALUE
testpool/homedir  type           filesystem
testpool/homedir  creation       Sat Nov 14 11:34 2009
testpool/homedir  used           24.5K
testpool/homedir  available      4.89G
testpool/homedir  referenced     24.5K
testpool/homedir  compressratio  1.00x
testpool/homedir  mounted        yes
testpool/homedir  quota          none
testpool/homedir  reservation    none
testpool/homedir  recordsize     128K
testpool/homedir  mountpoint     /testpool/homedir
testpool/homedir  sharenfs       off
testpool/homedir  checksum       on
testpool/homedir  compression    off
testpool/homedir  atime          on
testpool/homedir  devices        on
testpool/homedir  exec           on
testpool/homedir  setuid         on
testpool/homedir  readonly       off
testpool/homedir  zoned          off
testpool/homedir  snapdir        hidden
testpool/homedir  aclmode        groupmask
testpool/homedir  aclinherit     secure
bash-3.00# zfs set quota=500m testpool/homedir
bash-3.00# zfs set compression=on testpool/homedir
bash-3.00# zfs set mounted=no testpool/homedir
cannot set mounted property: read only property
bash-3.00# zfs get all testpool/homedir
NAME              PROPERTY       VALUE
testpool/homedir  type           filesystem
testpool/homedir  creation       Sat Nov 14 11:34 2009
testpool/homedir  used           24.5K
testpool/homedir  available      500M
testpool/homedir  referenced     24.5K
testpool/homedir  compressratio  1.00x
INHERITING ZFS PROPERTIES:
bash-3.00# zfs get -r compression testpool
NAME                        PROPERTY     VALUE  SOURCE
testpool                    compression  off    default
testpool/homedir            compression  on     local
testpool/homedir/nesteddir  compression  on     local

bash-3.00# zfs inherit compression testpool/homedir
bash-3.00# zfs get -r compression testpool
NAME                        PROPERTY     VALUE  SOURCE
testpool                    compression  off    default
testpool/homedir            compression  off    default
testpool/homedir/nesteddir  compression  on     local

bash-3.00# zfs inherit -r compression testpool/homedir
bash-3.00# zfs get -r compression testpool
NAME                        PROPERTY     VALUE  SOURCE
testpool                    compression  off    default
testpool/homedir            compression  off    default
testpool/homedir/nesteddir  compression  off    default
QUERYING ZFS PROPERTIES:
bash-3.00# zfs get checksum testpool/homedir
NAME              PROPERTY  VALUE  SOURCE
testpool/homedir  checksum  on     default

bash-3.00# zfs get all testpool/homedir
NAME              PROPERTY       VALUE
testpool/homedir  type           filesystem
testpool/homedir  creation       Sat Nov 14 11:34 2009
testpool/homedir  used           50K
testpool/homedir  available      500M
testpool/homedir  referenced     25.5K
testpool/homedir  compressratio  1.00x
testpool/homedir  mounted        yes
testpool/homedir  quota          500M
testpool/homedir  reservation    none
testpool/homedir  recordsize     128K
testpool/homedir  mountpoint     /testpool/homedir
testpool/homedir  sharenfs       off
testpool/homedir  checksum       on
testpool/homedir  compression    off
testpool/homedir  atime          on
testpool/homedir  devices        on
testpool/homedir  exec           on
testpool/homedir  setuid         on
testpool/homedir  readonly       off
testpool/homedir  zoned          off
testpool/homedir  snapdir        hidden
testpool/homedir  aclmode        groupmask
testpool/homedir  aclinherit     secure

bash-3.00# zfs get -s local all testpool/homedir
NAME              PROPERTY  VALUE  SOURCE
testpool/homedir  quota     500M   local
RAID-Z POOL:
bash-3.00# zpool create testraid5pool raidz c2d0s3 c2d0s4 c2d0s5
bash-3.00# zpool list
NAME           SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
testpool         2G   100M  1.90G   4%  ONLINE  -
testraid5pool  14.9G   89K  14.9G   0%  ONLINE  -
bash-3.00# df -h
Filesystem            size  used  avail  capacity  Mounted on
/dev/dsk/c1d0s0        20G   10G   9.1G      54%   /
/devices                0K    0K     0K       0%   /devices
ctfs                    0K    0K     0K       0%   /system/contract
proc                    0K    0K     0K       0%   /proc
mnttab                  0K    0K     0K       0%   /etc/mnttab
swap                  3.1G  736K   3.1G       1%   /etc/svc/volatile
objfs                   0K    0K     0K       0%   /system/object
/usr/lib/libc/libc_hwcap2.so.1
                       20G   10G   9.1G      54%   /lib/libc.so.1
fd                      0K    0K     0K       0%   /dev/fd
swap                  3.1G   48K   3.1G       1%   /tmp
swap                  3.1G   32K   3.1G       1%   /var/run
testpool              2.0G   25K   1.9G       1%   /testpool
testpool/homedir      2.0G  100M   1.9G       5%   /testpool/homedir
testraid5pool         9.8G   32K   9.8G       1%   /testraid5pool
DOUBLE PARITY RAID-Z POOL:
bash-3.00# zpool create doubleparityraid5pool raidz2 c2d0s3 c2d0s4 c2d0s5
bash-3.00# zpool list
NAME  SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
DRY RUN OF STORAGE POOL CREATION:
bash-3.00# zpool create -n testmirrorpool mirror c2d0s3 c2d0s4
would create 'testmirrorpool' with the following layout:
        testmirrorpool
          mirror
            c2d0s3
            c2d0s4
bash-3.00# zpool list
NAME      SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
testpool    2G  100M  1.90G   4%  ONLINE  -
bash-3.00# df
/                  (/dev/dsk/c1d0s0  ): 19485132 blocks     2318425 files
/devices           (/devices         ):        0 blocks           0 files
/system/contract   (ctfs             ):        0 blocks  2147483612 files
/proc              (proc             ):        0 blocks       16285 files
/etc/mnttab        (mnttab           ):        0 blocks           0 files
/etc/svc/volatile  (swap             ):  6598720 blocks      293280 files
/system/object     (objfs            ):        0 blocks  2147483444 files
/lib/libc.so.1     (/usr/lib/libc/libc_hwcap2.so.1): 19485132 blocks  2318425 files
/dev/fd            (fd               ):        0 blocks           0 files
/tmp               (swap             ):  6598720 blocks      293280 files
/var/run           (swap             ):  6598720 blocks      293280 files
/testpool          (testpool         ):  3923694 blocks     3923694 files
/testpool/homedir  (testpool/homedir ):  3923694 blocks     3923694 files

Note: Here the -n option is used not to create a zpool but just to check whether it is possible to create it. If it is possible, it gives the above output; otherwise it gives the error which would occur when actually creating the zpool.
LISTING THE POOLS AND ZFS:
bash-3.00# zpool list
NAME            SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
testmirrorpool  4.97G  52.5K  4.97G   0%  ONLINE  -
testpool           2G   100M  1.90G   4%  ONLINE  -

bash-3.00# zpool list -o name,size,health
NAME            SIZE   HEALTH
testmirrorpool  4.97G  ONLINE
testpool           2G  ONLINE

bash-3.00# zpool status -x
all pools are healthy
bash-3.00# zpool status -x testmirrorpool
pool 'testmirrorpool' is healthy
bash-3.00# zpool status -v
  pool: testmirrorpool
 state: ONLINE
 scrub: none requested
config:
        NAME            STATE   READ WRITE CKSUM
        testmirrorpool  ONLINE     0     0     0
          mirror        ONLINE     0     0     0
            c2d0s3      ONLINE     0     0     0
            c2d0s4      ONLINE     0     0     0
errors: No known data errors
  pool: testpool
 state: ONLINE
 scrub: none requested
config:
        NAME      STATE
        testpool  ONLINE
          c2d0s7  ONLINE

bash-3.00# zfs list -o name,sharenfs,mountpoint
NAME                  SHARENFS  MOUNTPOINT
testmirrorpool        off       /testmirrorpool
testpool              off       /testpool
testpool/homedir_old  off       /testpool/homedir_old

bash-3.00# zfs create testpool/homedir_old/nesteddir
bash-3.00# zfs list testpool/homedir_old
NAME                  USED  AVAIL  REFER  MOUNTPOINT
testpool/homedir_old  52K   1.97G  27.5K  /testpool/homedir_old
bash-3.00# zfs list -r testpool/homedir_old
NAME                            USED   AVAIL  REFER  MOUNTPOINT
testpool/homedir_old            52K    1.97G  27.5K  /testpool/homedir_old
testpool/homedir_old/nesteddir  24.5K  1.97G  24.5K  /testpool/homedir_old/nesteddir

bash-3.00# zfs get -r compression testpool
NAME                        PROPERTY     VALUE  SOURCE
testpool                    compression  off    default
testpool/homedir            compression  off    default
testpool/homedir/nesteddir  compression  off    default
bash-3.00# zfs set compression=on testpool/homedir/nesteddir
bash-3.00# zfs get -r compression testpool
NAME                        PROPERTY     VALUE  SOURCE
testpool                    compression  off    default
testpool/homedir            compression  off    default
testpool/homedir/nesteddir  compression  on     local
MOUNTING AND UNMOUNTING ZFS FILESYSTEMS:
bash-3.00# zfs get mountpoint testpool/homedir
NAME              PROPERTY    VALUE              SOURCE
testpool/homedir  mountpoint  /testpool/homedir  default

bash-3.00# zfs get mounted testpool/homedir
NAME              PROPERTY  VALUE  SOURCE
testpool/homedir  mounted   yes    -

bash-3.00# zfs set mountpoint=/mnt/altloc testpool/homedir
bash-3.00# zfs get mountpoint testpool/homedir
NAME              PROPERTY    VALUE        SOURCE
testpool/homedir  mountpoint  /mnt/altloc  local
LEGACY MOUNT POINTS:
Legacy file systems must be managed through the mount and umount commands and the /etc/vfstab file. Unlike normal ZFS file systems, ZFS doesn't automatically mount legacy file systems on boot.
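A minimal sketch of putting a dataset under legacy management; testpool/legacydir is a hypothetical dataset name, not one from these notes:

```
# zfs set mountpoint=legacy testpool/legacydir   # hand mounting over to mount/umount
# mkdir /mnt/legacydir
# mount -F zfs testpool/legacydir /mnt/legacydir
```

To have it mounted at boot, an /etc/vfstab entry is needed, for example:
testpool/legacydir  -  /mnt/legacydir  zfs  -  yes  -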
MOUNTING ZFS FILESYSTEMS:
bash-3.00# umountall
bash-3.00# zfs mount
bash-3.00# zfs mount -a
bash-3.00# zfs mount
testpool/homedir            /mnt/altloc
testpool/homedir/nesteddir  /mnt/altloc/nesteddir
testpool                    /testpool
Note:
1. The zfs mount -a command doesn't mount legacy file systems.
2. To force a mount on top of a non-empty directory, use the -O option.
3. To specify options like ro or rw, use the -o option.
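The options from the note above, sketched against the dataset used in this section:

```
# zfs mount -O testpool/homedir       # force a mount over a non-empty directory
# zfs mount -o ro testpool/homedir    # mount read-only for this session only
```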
UNMOUNTING ZFS FILESYSTEMS:
bash-3.00# zfs mount
testpool
testpool/homedir
testpool/homedir/nesteddir

Note: The subcommand works both ways - unmount and umount. This is to provide backwards compatibility.
ZFS WEB-BASED MANAGEMENT:
bash-3.00# /usr/sbin/smcwebserver start
Starting Sun Java(TM) Web Console Version 3.0.2 ...
The console is running
bash-3.00# /usr/sbin/smcwebserver enable
The enable subcommand enables the server to run automatically when the system boots.
ZFS SNAPSHOTS:
bash-3.00# zfs list -r
NAME                            USED   AVAIL  REFER  MOUNTPOINT
testmirrorpool                  75.5K  4.89G  24.5K  /testmirrorpool
testpool                        146K   1.97G  26.5K  /testpool
testpool/homedir_old            52K    1.97G  27.5K  /testpool/homedir_old
testpool/homedir_old/nesteddir  24.5K  1.97G  24.5K  /testpool/homedir_old/nesteddir
bash-3.00# zfs snapshot testpool/homedir_old@snap1
bash-3.00# zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
testpool/homedir_old@snap1  0     -      27.5K  -
PROPERTIES OF SNAPSHOTS:
bash-3.00# zfs get all testpool/homedir_old@snap1
NAME                        PROPERTY       VALUE                  SOURCE
testpool/homedir_old@snap1  type           snapshot               -
testpool/homedir_old@snap1  creation       Fri Nov 13 16:26 2009  -
testpool/homedir_old@snap1  used           0                      -
testpool/homedir_old@snap1  referenced     27.5K                  -
testpool/homedir_old@snap1  compressratio  1.00x                  -
bash-3.00# zfs set compressratio=2.00x testpool/homedir_old@snap1
cannot set compressratio property: read only property
bash-3.00# zfs set compression=on testpool/homedir_old@snap1
cannot set compression property for 'testpool/homedir_old@snap1': snapshot properties cannot be modified
RENAMING ZFS SNAPSHOTS:
bash-3.00# zfs rename testpool/homedir_old@snap1 additionalpool/homedir@snap3
cannot rename to 'additionalpool/homedir@snap3': snapshots must be part of same dataset
bash-3.00# zfs rename testpool/homedir_old@snap1 testpool/homedir_old@snap3
bash-3.00# zfs list -t snapshot
NAME USED AVAIL
DISPLAYING AND ACCESSING ZFS SNAPSHOTS:
bash-3.00# ls /testpool/homedir_old/.zfs/snapshot
snap2 snap3
bash-3.00# zfs list -r -t snapshot -o name,creation testpool/homedir_old
NAME CREATION
testpool/homedir_old@snap3 Fri Nov 13 16:26 2009
testpool/homedir_old@snap2 Fri Nov 13 16:31 2009
testpool/homedir_old/nesteddir@snap2 Fri Nov 13 16:31 2009
ROLLING BACK TO A ZFS SNAPSHOT:
bash-3.00# zfs rollback testpool/homedir_old@snap3
cannot rollback to 'testpool/homedir_old@snap3': more recent snapshots exist
Sun Solaris 10 OS/Storage-SVM,VxVM /Cluster
Manickam Kamalakkannan
Sun Solaris 10 Operating System
Page 195
use '-r' to force deletion of the following snapshots: testpool/homedir_old@snap2 bash-3.00# zfs rollback -r testpool/homedir_old@snap3
DESTROYING A ZFS SNAPSHOT:
bash-3.00# zfs destroy testpool/homedir_old@snap3
cannot destroy 'testpool/homedir_old@snap3': snapshot has dependent clones
use '-R' to destroy the following datasets:
testpool/additionaldir/testclone
bash-3.00# zfs destroy -R testpool/homedir_old@snap3
bash-3.00# zfs get sharenfs,quota testpool/additionaldir/testclone
NAME PROPERTY VALUE SOURCE
testpool/additionaldir/testclone sharenfs on local
testpool/additionaldir/testclone quota 500M local
REPLACING A ZFS FILESYSTEM WITH A ZFS CLONE:
bash-3.00# zfs list -r testpool/homedir_old
NAME USED AVAIL REFER MOUNTPOINT
testpool/homedir_old 74.5K 1.97G 27.5K /testpool/homedir_old
testpool/homedir_old@snap3 22.5K - 27.5K -
testpool/homedir_old/nesteddir 24.5K 1.97G 24.5K /testpool/homedir_old/nesteddir
testpool/homedir_old/nesteddir@snap2 0 - 24.5K -
bash-3.00# zfs list -r testpool/additionaldir
NAME USED AVAIL REFER MOUNTPOINT
testpool/additionaldir 48K 1.97G 25.5K /testpool/additionaldir
testpool/additionaldir/testclone 22.5K 500M 27.5K /testpool/additionaldir/testclone
bash-3.00# zfs promote testpool/additionaldir/testclone
bash-3.00# zfs list -r testpool/homedir_old
NAME USED AVAIL REFER MOUNTPOINT
testpool/homedir_old 47K 1.97G 27.5K /testpool/homedir_old
testpool/homedir_old/nesteddir 24.5K 1.97G 24.5K /testpool/homedir_old/nesteddir
testpool/homedir_old/nesteddir@snap2 0 - 24.5K -
bash-3.00# zfs list -r testpool/additionaldir
NAME USED AVAIL REFER MOUNTPOINT
testpool/additionaldir 75.5K 1.97G 25.5K /testpool/additionaldir
testpool/additionaldir/testclone 50K 500M 27.5K /testpool/additionaldir/testclone
Volume Manager
Solaris Volume Manager/Solstice Disk Suite 4.0
Advantages: Provides 3 major functionalities
1. Overcomes the disk size limitation by allowing multiple disk slices to be joined to form a bigger volume.
2. Fault tolerance by allowing mirroring of data from one disk to another and keeping parity information in RAID-5.
3. Performance enhancements by allowing spreading of the data.
Disk suite packages:
1. Format of the package is datastream.
2. Packages can be manually added by executing the command # pkgadd.
3. SUNWmd - Solstice Disk Suite
4. SUNWmdg - Solstice Disk Suite Tool
5. SUNWmdn - Solstice Disk Suite log daemon
Terminology:
A. Metadevice:
1. A virtual device composed of several physical devices (slices/disks).
2. Provides increased capacity, higher availability & better performance.
3. Standard metadevice names begin with "d" followed by a number, e.g. d10.
i. By default 128 unique metadevices in the range 0 to 127 can be created.
ii. Additional metadevices can be added by updating the file /kernel/drv/md.conf, but this may degrade the performance of the system.
4. Metadevice names are located in /dev/md/dsk and /dev/md/rdsk
B. State database/Meta database/md/Replica:
1. Provides the non-volatile storage necessary to keep track of configuration & status information for all metadevices and meta mirrors.
2. Also keeps track of error conditions that have occurred.
3. When the state database is updated, each replica is modified one at a time.
4. Needs a dedicated disk slice.
5. Has to be created before the logical devices are created.
6. A minimum of 3 databases has to be created.
7. N/2 replicas are required for a running system.
8. N/2+1 replicas are required when the system reboots.
9. The size of 1 replica is 4 MB.
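The replica quorum rules above (N/2 to keep running, N/2+1 to boot) can be sketched with simple shell arithmetic; the replica count of 6 here is only an example, not from any real configuration:

```shell
#!/bin/sh
# Quorum sketch for SVM state database replicas:
# a running system tolerates losing replicas down to N/2,
# but a reboot to multiuser mode needs a majority (N/2 + 1).
total_replicas=6
run_minimum=$((total_replicas / 2))        # replicas needed to keep running
boot_minimum=$((total_replicas / 2 + 1))   # replicas needed at boot
echo "running system needs: $run_minimum"
echo "reboot needs: $boot_minimum"
```

This is why the notes insist on a minimum of 3 replicas spread across slices: with fewer, losing a single slice can cost the boot quorum.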
RAID-0: Concatenation and Striping:
1. Joining of 2 or more disk slices to add up the disk space.
2. Serial in nature, i.e., sequential data operations are performed on the first disk, then the second disk, and so on.
3. Because it is serial in nature, new slices can be added without having to take a backup of the entire concatenated volume.
4. The address space is contiguous; data is stored volume by volume.
5. No fault tolerance.
6. Size of the volume = sum of all the physical components in that volume.
Note: We can use a concatenated/striped metadevice for any file system with the exception of / (root), swap, /usr, /var, /opt or any file system accessed during a Solaris upgrade or install.
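Point 6 above (concat size = sum of the components) as a quick shell sketch; the slice sizes are made up for illustration, not taken from any real layout:

```shell
#!/bin/sh
# Concatenation capacity sketch: unlike striping, a concat simply
# adds up all component sizes, so mixed-size slices waste nothing.
slice_a=500   # MB, illustrative
slice_b=300   # MB
slice_c=200   # MB
concat_size=$((slice_a + slice_b + slice_c))
echo "concat volume size: ${concat_size}MB"
```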
Striping:
1. Spreading of data over multiple disk drives, mainly to enhance performance by distributing data.
2. Data is divided into equal-sized chunks, by default 16 KB. Chunks = interlace.
3. The interlace value tells Disk Suite how much data is placed on a component before moving to the next component of the stripe.
4. Because the data is spread across a stripe, we gain increased performance as reads/writes are spread across multiple disks.
5. Size of the volume = N * smallest size of the physical components in that volume.
6. No fault tolerance.
Raid-1 Mirroring:
1. Write performance is slow.
2. Provides fault tolerance.
3. Provides data redundancy by simultaneously writing data on to two sub-mirrors.
Note:
a. A meta mirror is a special type of meta device made up of one or more other meta devices. Each meta device within a meta mirror is called a sub-mirror.
b. A meta mirror can be defined by using metainit.
i. Additional sub-mirrors can be added at a later stage without bringing the system down or disrupting reads and writes to the existing meta mirror.
ii. 'metattach' is used to attach a sub-mirror to a meta mirror.
iii. When attached, all the data from the other sub-mirror in the meta mirror is automatically written to the newly attached sub-mirror. This is called "resyncing".
iv. Once metattach is performed, the sub-mirror remains attached even when the system is rebooted.
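Going back to the striping list above, the interlace value decides which component holds a given chunk of the volume. A small shell sketch with illustrative numbers (16 KB interlace, 3 components; none of these values come from a real system):

```shell
#!/bin/sh
# Interlace sketch: the volume is cut into interlace-sized chunks,
# and chunk k lands on component (k % n) in round-robin order.
interlace_kb=16
n=3
offset_kb=80                              # logical offset into the stripe
chunk=$((offset_kb / interlace_kb))       # which chunk this offset falls in
component=$((chunk % n))                  # which component stores that chunk
echo "offset ${offset_kb}KB -> chunk $chunk on component $component"
```

This round-robin placement is what spreads reads/writes across all disks and gives striping its performance gain.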
Raid-5:
1. Provides fault tolerance.
2. Data redundancy.
3. Uses less space when compared with mirroring.
4. Data is divided into stripes and parity is calculated from the data; they are then stored in such a manner that the parity is distributed or rotated.
5. Size of the volume = (N-1) * smallest physical volume.
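The capacity formulas above can be compared side by side in a short shell sketch: striping gives N * smallest, while RAID-5 gives (N-1) * smallest because one component's worth of space goes to rotated parity. Component count and sizes are illustrative:

```shell
#!/bin/sh
# Usable-capacity sketch for a 4-component volume whose smallest
# component is 200MB (sizes illustrative).
n=4
smallest=200
stripe=$((n * smallest))          # RAID-0 stripe: N * smallest
raid5=$(( (n - 1) * smallest ))   # RAID-5: (N-1) * smallest, parity costs one component
echo "stripe=${stripe}MB raid5=${raid5}MB"
```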
System files associated with the disk suite:
1. 3 system files:
a. /etc/lvm/md.tab
b. /etc/lvm/md.cf
c. /etc/lvm/mddb.cf
3.a. /etc/lvm/md.tab
i. Used by the metainit and metadb commands as a workspace.
ii. Each meta device may have a unique entry.
iii. Used only when creating metadevices, hot spares/database replicas.
iv. Not automatically updated by Disk Suite utilities.
v. May have little or no correspondence with actual meta devices, hot spares or replicas.
vi. Input file used by metainit, metadb, metahs.
vii. The entries in this file are similar to the output displayed by # metastat -p.
viii. # metainit -a => activates all devices defined in this file.
3.b. /etc/lvm/md.cf
i. Automatically updated whenever the configuration is changed.
ii. Basically a disaster recovery file and should never be edited.
iii. The md.cf file does not get updated when hot sparing occurs.
iv. Should never be used blindly after a disaster. Be sure to examine the file first.
Output:
bash-3.00# cat /etc/lvm/md.cf
# metadevice configuration file
# do not hand edit
d100 1 1 c1d0s4
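As a sketch of the md.tab workspace format described above (entries follow the same style as the `metastat -p` output, as the notes point out), a hand-maintained file for one concat and one two-slice stripe might look like this; the device names and interlace are illustrative, not from a real system:

```
# /etc/lvm/md.tab (sample workspace entries, hand-maintained)
# d100: concat, 1 stripe of 1 slice
d100 1 1 c1d0s4
# d110: stripe, 1 stripe of 2 slices with a 32k interlace
d110 1 2 c0t1d0s4 c0t2d0s4 -i 32k
```

After writing such entries, `metainit d100` (or `metainit -a` for everything in the file) activates them.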
3.c. /etc/lvm/mddb.cf
i. Created whenever the 'metadb' command is run and used by the 'metainit' command to find the locations of the meta device state databases.
ii. Never edit this file.
iii. Each meta device state database replica has a unique entry in the file.
Output:
bash-3.00# cat /etc/lvm/mddb.cf
#metadevice database location file do not hand edit
#driver minor_t daddr_t device id checksum
cmdk 7 16 id1,cmdk@ASEAGATE_ST32500NSSUN250G_0732B4B31T=5QE4B31T/h -4269
cmdk 7 8208 id1,cmdk@ASEAGATE_ST32500NSSUN250G_0732B4B31T=5QE4B31T/h -12461
cmdk 7 16400 id1,cmdk@ASEAGATE_ST32500NSSUN250G_0732B4B31T=5QE4B31T/h -20653
4. /kernel/drv/md.conf
a. Used by the metadisk driver when it is initially loaded.
b. The only field normally modified in the file is "nmd". nmd = represents the number of meta devices supported by the driver.
c. If the field is modified, perform a reconfiguration boot to build the meta devices. (OK boot -r)
d. If "nmd" is lowered, any meta device existing between the old number and the new number may not persist.
e. Default: 128.
f. Supports up to 1024.
g. If a larger number of meta devices is configured, performance degradation will happen.
h. If larger numbers of metadevices are added, the replicas/state database have to be increased.
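A sketch of the line typically edited in /kernel/drv/md.conf to raise the metadevice limit; the nmd value of 256 is an illustrative increase over the default 128, and the exact surrounding fields may differ by Solaris release. Remember the reconfiguration boot afterwards:

```
# /kernel/drv/md.conf (fragment)
# Raise nmd from the default 128; do not exceed 1024.
name="md" parent="pseudo" nmd=256 md_nsets=4;
```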
Hot spares:
1. Disk Suite's hot spare facility automatically replaces failed sub-mirror/RAID components, provided that a spare component is available and reserved.
2. They are temporary fixes, used until failed components are either repaired or replaced.
Hot spare pool:
1. Is a collection of slices reserved by Disk Suite to be automatically substituted in case of a slice failure in either a sub-mirror or RAID-5 meta device.
2. May be allocated, relocated or reassigned at any time unless a slice in that hot spare pool is being used to replace a damaged slice of its associated meta devices.
Operations performed on a meta device:
1. Mount the meta device on a directory
2. Unmount the meta device
3. Copy files to the meta device
4. Read and write files from and to the meta device
5. ufsdump & ufsrestore the meta device
Commands used in SVM:
1. # metarecover - recover soft partition information; scans a specified component to look for soft partition configuration information & to regenerate the configuration.
2. # metareplace - enable/replace components of sub-mirrors/RAID-5 meta devices.
3. # metaroot - set up system files for / (root); edits the files /etc/system and /etc/vfstab.
4. # metastat - display status for meta devices or hot spare pools.
5. # metasync - handle meta device resync during reboot.
6. # metattach / # metadetach - attach/detach a meta device.
7. # metainit - configure the meta device.
8. # metaparam - modify the parameters of the meta devices.
9. # growfs - non-destructively expand a UFS file system.
10. # metaclear - delete active meta devices and hot spare pools.
11. # metadb - create & delete replicas of the meta device state database.
12. # metahs - manage hot spares and hot spare pools.
To create a replica/meta state database/metadb:
Remember:
1. Before creating a replica, make sure the dedicated slice for the replica exists.
2. We can create a slice of 50 MB or 100 MB.
3. No file system is required on that slice.
4. Once the slice is dedicated to replicas, it can't be deleted unless all the metadevices are removed.
5. We cannot remove a single replica from the slice; all the replicas on the slice must be deleted together.
# metadb -afc3 c0d1s7
here metadb = is the command to create the replicas
-a = to add the replica to the slice c0d1s7
-f = since we are creating the replica for the first time
-c = specify the count of the replicas.
Note: 1. The minimum requirement is 3 replicas.
# metadb
# metadb -i
will display information about the existing replicas and their status.
# metadb -d c0d1s5
will delete all the replicas created on the slice c0d1s5.
# metadb -d -f c0d1s7
will forcefully delete all the replicas on the slice c0d1s7. This is done when the last replica is going to be removed.
Note: Before deleting the last replica, make sure no meta device exists.
Outputs:
bash-3.00# metadb -afc6 c1d0s7
bash-3.00# metadb
flags first blk
a u 16
a u 8208
a u 16400
a u 24592
a u 32784
a u 40976
Eg:
1. # metainit d0 1 1 c0d1s4
here metainit = to create a meta device
d0 = name of the meta device
1 = number of stripes
1 = number of physical components/slices
c0d1s4 = is the physical component
2. # metainit d1 1 2 c0t1d0s4 c0t2d0s4
here
d1 = name of the meta device
1 = number of stripes
2 = number of physical components
c0t1d0s4, c0t2d0s4 = are the physical components
So, meta device d1 is going to have 1 stripe with 2 slices.
Note: Make sure that the slices exist before creating a meta device.
3. # metainit d3 3 [A] 1 c0t0d0s4 [B] 1 c0t2d0s2 [C] 2 c0t3d0s1 c0t3d0s3
here
d3 = is the name of the meta device
3 = number of stripes (here 3 stripes)
A = first stripe has 1 physical component c0t0d0s4
B = second stripe has 1 physical component c0t2d0s2
Note: A complete hard disk connected to target t2 is dedicated to stripe 2.
C = third stripe has 2 physical components c0t3d0s1, c0t3d0s3
To remove the meta device:
Note: Before clearing the meta device, make sure that the meta device is unmounted.
# metaclear <meta_device_name>
eg: # metaclear d0
will remove the meta device d0
To view the status of the meta devices:
# metastat
# metastat -p
will display the status of the meta devices
Outputs:
Note: Creating a metadevice, displaying the meta device status, clearing the meta device
bash-3.00# metainit d0 1 1 c1d0s3
d0: Concat/Stripe is setup
bash-3.00# metastat
To clear the meta device:
bash-3.00# metaclear d10
d10: Concat/Stripe is cleared
bash-3.00# metastat -p
d20 2 2 c2d0s0 c2d0s1 -i 32b \
   2 c2d0s3 c2d0s4 -i 64b
d0 1 1 c1d0s3
Creating a mirror:
1. A mirror is a meta device composed of one or more sub-mirrors.
Sub-mirror:
a. Is made up of one or more striped or concatenated meta devices.
b. Each meta device within a meta mirror is called a sub-mirror.
2. Mirroring data provides us with maximum data availability by maintaining multiple copies of our data.
3. The system must contain at least 3 state database replicas before creating mirrors.
4. Any file system, including / (root), swap and /usr, or any application such as a database, can use a mirror.
5. An error on a component does not cause the entire mirror to fail.
6. To get maximum protection & performance, place mirrored meta devices on different physical components (disks) & on different disk controllers. Since the primary purpose of mirroring is to maintain availability of data, defining mirrored meta devices on the same disk is NOT RECOMMENDED.
7. When mirroring an existing file system/data, be sure that the existing data is contained in the first sub-mirror. When the second sub-mirror is subsequently attached, data from the initial sub-mirror is copied onto the attached sub-mirror.
What to do?
1. Create a simple meta device (1 stripe with 1 slice)
2. Create another simple meta device (1 stripe with 1 slice)
3. Create a mirror meta device and associate it with one meta device (adding the first sub-mirror, a one-way mirror)
4. Attach the other meta device to the mirror meta device (adding the second sub-mirror)
5. Mount the mirrored meta device
6. Access the mount point.
How to do?
1. # metainit d10 1 1 c0t1d0s3
2. # metainit d20 1 1 c0t2d0s3
3. # metainit d30 -m d10
here d30 = main mirror, d10 = sub-mirror. Converting d10 into d30 as a mirror.
4. # metattach d30 d20
attaching d20 to d30. d20 is the second sub-mirror.
5. # metastat | grep %
will display the sync status.
6. # newfs /dev/md/rdsk/d30
# mkdir /mirror
# mount /dev/md/dsk/d30 /mirror
# cd /mirror
Note:
1. Sync will happen after attaching to the mirror.
2. Slices have to be of the same size & geometry; if not, greater than the source size is recommended.
How to break the mirror:
1. Detach the sub-mirror from the mirror, which is unmounted.
2. Clear the mirror and sub-mirror meta devices.
3. Mount the individual slices; the same data will be available in both the physical components.
How to do?
1. # metadetach d30 d20
will remove d20 from the meta device d30.
2. # metaclear d20
clears/removes the meta device d20.
3. # metaclear -r d30
will remove both the meta devices d30 and d10.
4. # mkdir /d1
# mkdir /d2
# mount /dev/dsk/c0t1d0s3 /d1
# mount /dev/dsk/c0t2d0s3 /d2
# ls /d1 ; ls /d2
will display the contents of /d1 and /d2 respectively. The contents remain the same in both the slices and mount points.
Outputs:
bash-3.00# metainit d30 1 1 c1d0s4
d30: Concat/Stripe is setup
bash-3.00# metainit d40 1 1 c2d0s6
d40: Concat/Stripe is setup
bash-3.00# metainit d35 -m d30
d35: Mirror is setup
NOTE: THE EXAMPLE COMMANDS AND OUTPUTS WILL DIFFER FOR ALL RELATED TO SVM
bash-3.00# metattach d35 d40
d35: submirror d40 is attached
bash-3.00# metastat | grep %
Resync in progress: 45 % done
Replacing failed hard disk drives:
1. If any sub-mirror fails, data can still be accessed using the mirror device.
2. Suppose we remove one disk which contains the 2nd sub-mirror; we can still access the data.
3. But the state of the 2nd sub-mirror will remain OKAY in the output of the 'metastat' command until any new file creation/modification is performed in the mirror. Only after changes will the state change to 'MAINTENANCE'.
Replacing the failed disk with a different target:
# metareplace d30 c0t1d0s3 c0t3d0s5
Replacing the failed disk in the same target/destination: # metareplace -e d30 c0t1d0s3
Soft partition:
1. Dividing one logical component (meta device) into many soft partitions. It can be laid out over physical disks/slices.
# metainit d5 1 1 c0t11d0s6
Consider the size of c0t11d0s6 to be 10 GB. Then the size of the meta device d5 is 10 GB.
# metainit d61 -p d5 1g
here
metainit = to create a soft partition
-p = to create a soft partition
A: d61 = the new meta device going to be created
B: d5 = is the existing meta device with 10 GB of size.
C: 1g = is the size of the new meta device d61
# metainit d62 -p d5 1g
# metaclear d61
Removes the soft partition d61 only.
# metaclear -p d5
Will remove all soft partitions from d5.
A soft partition is a means of dividing a disk or volume into as many partitions as needed, overcoming the current limitation of 7. This is done by creating logical partitions within physical disk slices or logical volumes. Soft partitions differ from hard disk slices created using the 'format' command because soft partitions can be non-contiguous, whereas hard disk slices are contiguous. Therefore soft partitions can cause I/O performance degradation.
Note:
1. No automatic problem detection is available in SVM.
2. The SVM software does not detect problems with the state database/replicas until there is a change to an existing SVM configuration and an update to the database replicas is required. If insufficient state database replicas are available, you'll need to boot to single-user mode and delete/replace enough of the corrupted/missing database replicas to achieve the quorum.
Outputs: soft partitions:
d502: Soft Partition
Device: d500 State: Okay Size: 204800 blocks (100 MB)
Extent Start Block 0 2097216
d500: Concat/Stripe
Size: 7143424 blocks (3.4 GB)
Stripe 0: (interlace: 32 blocks)
Device Start Block Dbase
c2t2d0s1 0 No
c2t2d0s3 0 No
c2t2d0s4 0 No
c2t2d0s5 0 No
c2t14d0s4 0 No
c2t14d0s5 0 No
c2t14d0s6 0 No
d501: Soft Partition
Device: d500 State: Okay Size: 2097152 blocks (1.0 GB)
Extent Start Block 0 32
(continuation of the metastat output above: Block count 204800; each of the seven components of d500 shows State: Okay, Reloc: Yes, with no hot spare in use)
Expanding a file system:
Note:
1. Once a file system is expanded it cannot be shrunk.
2. Aborting a 'growfs' command may cause temporary loss of free space. The space can be recovered using the 'fsck' command after the file system is unmounted using 'umount'.
3. The 'growfs' command non-destructively expands a file system up to the size of the file system's physical device or meta device.
4. 'growfs' write-locks the file system when expanding a mounted file system. Access times are not kept while the file system is write-locked. The 'lockfs' command can be used to check the file system lock status and to unlock the file system in the unlikely event that 'growfs' aborts without unlocking it.
5. We can perform:
a. expanding a non-metadevice component
b. expanding a mounted file system
c. expanding a mounted file system to an existing meta mirror
d. expanding an unmounted file system
e. expanding a mounted file system using stripes.
6. 'growfs':
a. attach the disk space
b. grow the space
1. # newfs /dev/rdsk/c0t1d0s3
# mkdir /expand
# mount /dev/dsk/c0t1d0s4 /expand
# metainit -f d100 1 1 c0t1d0s3
# umount /expand
# mount /dev/md/dsk/d100 /expand
# metattach d100 c0t10d0s6
New slice 6 is attached to d100
# growfs -M /expand /dev/md/rdsk/d100
Raw disk is expanded now
Growing a mirror:
1. Attach each individual component to each sub-mirror
2. Grow the mirror
# metainit d21 1 1 c0t10d0s3 => 400 MB
# metainit d22 1 1 c0t11d0s3 => 400 MB
# metainit d23 -m d21 => one-way mirror
# metattach d23 d22 => two-way mirror
# newfs /dev/md/rdsk/d23
# mkdir /mirror
# mount /dev/md/dsk/d23 /mirror
# metattach d21 c0t10d0s4 => attaching 400 MB of disk space to the sub-mirror
# metattach d22 c0t11d0s4 => attaching 400 MB of disk space to the sub-mirror
# growfs -M /mirror /dev/md/rdsk/d23
=> mirror will be expanded to 800 MB.
# df -h
Growing the RAID-5 device:
# metainit d75 -r c0t10d0s3 c0t10d0s4 c0t11d0s3 => each slice is 400 MB
# newfs /dev/md/rdsk/d75
# mkdir /raid5
# mount /dev/md/dsk/d75 /raid5
# metattach d75 c0t11d0s6
Slice size is 500 MB, but it'll take only 400 MB
# growfs -M /raid5 /dev/md/rdsk/d75
# df -h
Note: The newly attached slice will hold only data. It won't be used for storing parity information.
-M = (directory name) The file system to be expanded is mounted on the directory name. File system locking will be used.
Outputs: Growing the file system:
bash-3.00# df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c1t0d0s0 9.6G 5.0G 4.5G 53% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 2.1G 1.5M 2.1G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
fd 0K 0K 0K 0% /dev/fd
swap 2.1G 80K 2.1G 1% /tmp
swap 2.1G 40K 2.1G 1% /var/run
/dev/md/dsk/d5 466M 1.1M 419M 1% /mnt/mirror
/dev/md/dsk/d50 935M 1.0M 887M 1% /mnt/concat_grow
bash-3.00# pwd
/mnt/concat_grow
bash-3.00# growfs -M /mnt/concat_grow /dev/md/rdsk/d50
/dev/md/rdsk/d50: Unable to find Media type. Proceeding with system determined parameters.
/dev/md/rdsk/d50: 3047424 sectors in 93 cylinders of 128 tracks, 256 sectors
1488.0MB in 47 cyl groups (2 c/g, 32.00MB/g, 15040 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 65824, 131616, 197408, 263200, 328992, 394784, 460576, 526368, 592160, 2434336, 2500128, 2565920, 2631712, 2697504, 2763296, 2829088, 2894880, 2960672, 3026464
bash-3.00# pwd
/mnt/concat_grow
ROOT MIRRORING:
WHAT TO DO?
1. Ensure that the alternate disk has equal geometry & size.
2. Take a backup of the /etc/system and /etc/vfstab files.
3. Copy the VTOC from the root (booting) disk to the alternate disk.
4. Ensure that the state database is created.
5. Convert the root slice to a logical component forcefully.
6. Create another metadevice for duplicating the root slice.
7. Convert the swap slice to a logical component forcefully.
8. Create another metadevice for duplicating the swap slice.
9. Associate the first sub-mirror (for root) to the mirror root.
10. Associate the first sub-mirror (for swap) to the mirror swap.
11. Update the system & vfstab files by running the 'metaroot' command.
12. Reboot the system.
13. Associate the second sub-mirror to the mirror root.
14. Associate the second sub-mirror to the mirror swap.
15. Install the boot block or grub in the alternate root slice.
16. See the physical path for the alternate disk.
17. Set an alias name at the OK prompt.
18. Set the boot sequence at the OK prompt.
How to do?
1. # format
To create the slices manually.
2. # cp /etc/system /etc/system.orig
# cp /etc/vfstab /etc/vfstab.orig
3. # prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t12d0s2
Note: fmthard -> populates the label on the new hard disk drive
4. # metadb -afc3 c0t8d0s7 c0t10d0s7 c0t12d0s7
(if the replicas already exist, this step can be skipped)
5. # metainit -f d5 1 1 c0t8d0s0
Converting the root slice to a metadevice forcefully
6. # metainit d10 1 1 c0t12d0s0
Creating another metadevice for root
7. # metainit -f d25 1 1 c0t8d0s1
Converting the swap slice to a metadevice forcefully
8. # metainit d30 1 1 c0t12d0s1
Creating another metadevice for swap
9. # metainit d15 -m d5
Associating d5 with d15. Here d15 = main mirror for root
10. # metainit d35 -m d25
Associating d25 with d35. Here d35 = main mirror for swap
11. # metaroot d15
a. 'metaroot' edits the files /etc/system and /etc/vfstab so that the system may be booted with the root filesystem on a meta device.
b. 'metaroot' may also be used to edit the files so that the system may be booted with the root file system on a conventional disk device.
c. Observe the changes to the files /etc/vfstab and /etc/system.
12. # init 6
Note: Make sure the sync is completed before rebooting the system by executing the command # metastat | grep %
13. # metattach d15 d10
Adding the second sub-mirror for root
14. # metattach d35 d30
Adding the second sub-mirror for swap
15. # cd /usr/platform/`uname -m`/lib/fs/ufs
# installboot bootblk /dev/rdsk/c0t12d0s0
Installing the boot block on a SPARC machine
# installgrub -fm /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1d0s0
Installing grub on an x86 machine.
16. # ls -l /dev/dsk/c0t12d0s0
Will display the physical path of the logical device. Please make a note of the physical path.
17. OK nvalias
18. OK setenv boot-device
BREAKING THE MIRROR:
# metadetach d120 d100
# metaroot c0t0d0s0
c0t0d0s0 = raw disk of the source disk which is running the OS.
Will revert /etc/system & /etc/vfstab to their default status.
# init 6
# metaclear d100
# metaclear -r d120
Raid-5:
# metainit <meta_device_name> -r
# metainit d100 -r c0t0d0s4 c0t1d0s4 c0t2d0s4
-r = specifies that the configuration is RAID level 5.
# metastat | grep %
HOT SPARE:
1. The hot spare facility included with Disk Suite allows automatic replacement of failed sub-mirror/RAID-5 components, provided spare components are available & reserved.
2. Component replacement & resyncing of failed components is automatic.
3. A hot spare is a component that is running (but not being used) which can be substituted for a broken component in a sub-mirror of a two- or three-way meta mirror or a RAID-5 device.
Note:
4. Failed components in a one-way meta mirror cannot be replaced by a hot spare.
5. Components designated as hot spares cannot be used in sub-mirrors or another meta device in the 'md.tab' file. They must remain ready for immediate use in the event of a component failure.
Hot spare states:
1. Has 3 states: a. Available b. In-use c. Broken
a. Available: 'Available' hot spares are running and ready to accept data, but are not currently being written to or read from.
b. In-use: 'In-use' hot spares are currently being written to and read from.
c. Broken:
1. 'Broken' hot spares are out of service.
2. A hot spare is placed in the broken state when an I/O error occurs.
2. The number of hot spare pools is limited to 1000.
Defining hot spares:
1. Hot spare pools are named 'hspnnn' where 'nnn' is a number in the range 000-999.
2. A metadevice cannot be configured as a hot spare.
3. Once the hot spare pools are defined and associated with a sub-mirror, the hot spares are "available" for use. If a component failure occurs, Disk Suite searches through the list of hot spares in the assigned pool and selects the first "available" component that is equal or greater in disk capacity.
4. If a hot spare of adequate size is found, the hot spare state changes to "in-use" and a resync operation is automatically performed. The resync operation brings the hot spare into sync with the other sub-mirror or RAID-5 components.
5. If a component of adequate size is not found in the list of hot spares, the sub-mirror that failed is considered "erred" and that portion of the sub-mirror no longer replicates the data.
Hot spare conditions to avoid:
1. Associating hot spares of the wrong size with a sub-mirror. This condition occurs when hot spare pools are defined and associated with a sub-mirror & none of the hot spares in the hot spare pool is equal to or greater than the smallest component in the sub-mirror.
2. Having all the hot spares within the hot spare pool in use. In this case immediate action is required:
a. 2 possible solutions or actions can be taken:
i. First is to add additional hot spares
ii. To repair some of the components that have been hot-spare replaced
Note: If all hot spares are in use and a sub-mirror fails due to errors, that portion of the mirror will no longer be replicated.
Manipulating hot spare pools:
# metahs
= adding hot spares to hot spare pools
= deleting hot spares from hot spare pools
= replacing hot spares in hot spare pools
= enabling hot spares
= checking the status of the hot spares
Adding a hot spare / creating a hot spare pool:
1. # metainit hsp000 c0t2d0s5
Creates a hot spare pool with the name 'hsp000'
2. # metainit hsp001 c0t1d0s4 c0t11d0s4
(or)
# metahs -a hsp001 c0t1d0s4 c0t11d0s4
-a = to add a hot spare
-i = to obtain the information
Deleting hot spares:
1. Hot spares can be deleted from any or all of the hot spare pools to which they have been associated.
2. When a hot spare is deleted from a hot spare pool, the positions of the remaining hot spares change to reflect the new order. For example, if the second of 3 hot spares in a hot spare pool is deleted, the 3rd hot spare moves to the second position.
3. # metahs -d hsp000 c0t11d0s4
Removes the slice from the hot spare pool
-d = to delete
4. Removing a hot spare pool:
Note: Before removing the hot spare pool, remove all the hot spares from the pool using 'metahs' with the -d option and the hot spare name.
# metahs -d <hot_spare_pool> <component>
deletes only the spare
# metahs -d <hot_spare_pool>
deletes the hot spare pool
Replacing hot spares:
Note:
1. Hot spares that are in the 'In-use' state cannot be replaced by other hot spares.
2. The order of hot spares in the hot spare pools is NOT CHANGED when a replacement occurs.
3. # metahs -r
# metahs -r hsp000 c0t10d0s4 c0t11d0s4
c0t11d0s4 replaces c0t10d0s4
Associating the hot spare pool with a sub-mirror/RAID-5 metadevice:
1. # metaparam
modifies the parameters of the meta devices.
# metaparam -h
# metaparam -h hsp000 d101
# metaparam -h hsp000 d102
Note: Where d101, d102 are sub-mirrors of the d103 mirror.
-h = specifies the hot spare pool to be used by a meta device
Disassociating the hot spare pool from a sub-mirror/RAID-5 metadevice:
# metaparam -h none
# metaparam -h none d101
# metaparam -h none d102
where 'none' specifies that the meta device is disassociated from the hot spare pool associated with it.
# metahs -d hsp000 c0t2d0s5 c0t2d0s6
# metahs -d hsp000
# metaclear d100
# metadetach d15 d12
# metaclear d12
# metaclear -r d15
To view the status of a hot spare pool:
# metahs -i
Note: Suppose the failed disk is going to be replaced to free up a hot spare.
# metadevadm
Updates the metadevice information
-u = obtain the device ID associated with the disk specifier. This option is used when a disk drive has had its device ID changed during a firmware upgrade or due to changing the controller of a storage.
-v = execution in verbose mode. Has no effect when used with the -u option; verbose is the default.
# metadevadm -v -u c0t11d0s4
Updates the device information.
# metareplace -e d103 c0t10d0s3
To replace in the same location
1. Now the hot spare will be available
2. Status of the spare disk will change from 'In-use' to 'Available'
Outputs:
bash-3.00# metahs -a hsp001 c0t9d0s0 c0t9d0s1 c0t9d0s3 c0t9d0s4
hsp001: Hotspares are added
bash-3.00# metahs -i
hsp001: 4 hot spares
Device       Status      Length           Reloc
c0t9d0s0     Available   1027216 blocks   Yes
c0t9d0s1     Available   1027216 blocks   Yes
c0t9d0s3     Available   1027216 blocks   Yes
c0t9d0s4     Available   1027216 blocks   Yes
Device Relocation Information:
Device    Reloc   Device ID
c0t9d0    Yes     id1,sd@SFUJITSU_MAG3182L_SUN18G_01534930____
bash-3.00# metastat -p
d5 -m d0 d10 1
d0 1 1 c0t8d0s0
d10 1 1 c0t10d0s0
d15 1 1 c0t12d0s0
hsp001 c0t9d0s0 c0t9d0s1 c0t9d0s3 c0t9d0s4
hsp001: 4 hot spares
Device       Status      Length           Reloc
c0t9d0s0     Available   1027216 blocks   Yes
c0t9d0s1     Available   1027216 blocks   Yes
c0t9d0s3     Available   1027216 blocks   Yes
c0t9d0s4     Available   1027216 blocks   Yes
bash-3.00# metahs -a hsp001 c0t9d0s5
hsp001: Hotspare is added
bash-3.00# metastat -p
d5 -m d0 d10 1
d0 1 1 c0t8d0s0
d10 1 1 c0t10d0s0
d15 1 1 c0t12d0s0
hsp001 c0t9d0s0 c0t9d0s1 c0t9d0s3 c0t9d0s4 c0t9d0s5
bash-3.00# metahs -d hsp001 c0t9d0s5
hsp001: Hotspare is deleted
bash-3.00# metahs -i
hsp001: 4 hot spares
Device       Status      Length           Reloc
c0t9d0s0     Available   1027216 blocks   Yes
c0t9d0s1     Available   1027216 blocks   Yes
c0t9d0s3     Available   1027216 blocks   Yes
c0t9d0s4     Available   1027216 blocks   Yes
Device Relocation Information:
Device    Reloc   Device ID
c0t9d0    Yes     id1,sd@SFUJITSU_MAG3182L_SUN18G_01534930_
bash-3.00# metahs -r hsp001 c0t9d0s3 c0t9d0s5
hsp001: Hotspare c0t9d0s3 is replaced with c0t9d0s5
bash-3.00# metahs -i
hsp001: 4 hot spares
Device       Status      Length           Reloc
c0t9d0s0     Available   1027216 blocks   Yes
c0t9d0s1     Available   1027216 blocks   Yes
c0t9d0s5     Available   1027216 blocks   Yes
c0t9d0s4     Available   1027216 blocks   Yes
Device Relocation Information:
Device    Reloc   Device ID
c0t9d0    Yes     id1,sd@SFUJITSU_MAG3182L_SUN18G_01534930____
bash-3.00# metahs -d hsp001
metahs: ent250: hsp001: hotspare pool is busy
bash-3.00# metahs -d hsp001 c0t9d0s0 c0t9d0s1 c0t9d0s5 c0t9d0s4
hsp001: Hotspares are deleted
bash-3.00# metahs -d hsp001
hsp001: Hotspare pool is cleared
bash-3.00# metahs -i
metahs: ent250: no hotspare pools found
bash-3.00# metaparam -h hsp005 d0
bash-3.00# metaparam -h hsp005 d10
bash-3.00# metastat -p
d5 -m d0 d10 1
d0 1 1 c0t8d0s0 -h hsp005
d10 1 1 c0t10d0s0 -h hsp005
d15 1 1 c0t12d0s0
hsp005 c0t9d0s0 c0t9d0s1 c0t9d0s3 c0t9d0s4
bash-3.00# metainit d100 -r c0t8d0s1 c0t10d0s1 c0t12d0s1
Output (truncated): # metastat
d0: Submirror of d5
    State: Resyncing
    Hot spare pool: hsp005
    Size: 1015808 blocks (496 MB)
    Stripe 0:
        Device     Start Block   Dbase   State       Reloc   Hot Spare
        c0t8d0s0   0             No      Resyncing   Yes     c0t9d0s1

d10: Submirror of d5
    State: Okay
    Hot spare pool: hsp005
    Size: 1015808 blocks (496 MB)
    Stripe 0:
        Device      Start Block   Dbase   State   Reloc   Hot Spare
        c0t10d0s0   0             No      Okay    Yes
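The one-line `metastat -p` format is convenient for post-processing. A sketch, with sample text modeled on the outputs in this section (the awk logic is illustrative, not an SVM tool):

```shell
# From metastat -p style output, list each metadevice together with the
# hot spare pool named by its -h parameter (sample lines, not live output).
metastat_p='d5 -m d0 d10 1
d0 1 1 c0t8d0s0 -h hsp005
d10 1 1 c0t10d0s0 -h hsp005
d15 1 1 c0t12d0s0'
pairs=$(printf '%s\n' "$metastat_p" |
  awk '{for (i = 1; i < NF; i++) if ($i == "-h") print $1, $(i + 1)}')
printf '%s\n' "$pairs"
```

This makes it easy to audit which metadevices are protected by a hot spare pool and which are not.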
Note: Additional information on SVM:
1. Example entries for the file /etc/lvm/md.tab
## for raid-0 concatenation with striping
d80 1 3 c0t6d0s7 c0t4d0s7 c0t3d0s7
or
d80 1 3 /dev/dsk/c0t6d0s7 /dev/dsk/c0t4d0s7 /dev/dsk/c0t3d0s7
## for raid-1 mirroring
d0 1 1 c0t4d0s5
d10 1 1 c0t6d0s5
d5 -m d0
## for raid-5 striping with parity
d100 -r c0t2d0s3 c0t3d0s3 c0t5d0s3
## for hot spares
d0 1 1 c0t4d0s5 -h hsp001
d10 1 1 c0t6d0s5 -h hsp001
### for creating replicas
mddb01 -c3 c0t0d0s7
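Each md.tab entry starts with the metadevice or replica name, so the file can be summarized with a one-liner. A sketch over sample lines mirroring the examples above (the awk/sort pipeline is illustrative, not part of SVM):

```shell
# List the metadevice/replica names defined by md.tab style entries
# (the first field of each line). Sample lines mirror the examples above.
mdtab='d80 1 3 c0t6d0s7 c0t4d0s7 c0t3d0s7
d0 1 1 c0t4d0s5 -h hsp001
d10 1 1 c0t6d0s5 -h hsp001
d5 -m d0
d100 -r c0t2d0s3 c0t3d0s3 c0t5d0s3
mddb01 -c3 c0t0d0s7'
names=$(printf '%s\n' "$mdtab" | awk '{print $1}' | sort -u)
printf '%s\n' "$names"
```

Comparing this list against `metastat -p` output is a quick way to spot entries in md.tab that have not yet been set up with `metainit -a`.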
2. To set up the configuration from the file using a command:
a. for replicas: # metadb -af mddb01
b. for all metadevices: # metainit -a
c. only for a selected metadevice: # metainit d10
3. To delete the root mirroring, for eg:
# metadetach d120 d100
# metaroot c0t0d0s0
(Will change the entries in the files /etc/vfstab and /etc/system)
# init 6
# metaclear
4. Soft partitions:
a. A means of dividing a disk or volume into as many partitions as needed, overcoming the current limitation of eight slices (0-7). This is done by creating logical partitions within physical disk slices or logical volumes.
b. No automatic problem detection.
c. The SVM s/w does not detect problems with the state database/replicas until there is a change to an existing SVM configuration and an update to the database replicas is required. If insufficient state database replicas are available, you'll need to boot in single user mode and delete/replace enough of the corrupted/missing database replicas to achieve a quorum.
d. Soft partitions differ from hard slices created using the 'format' command because soft partitions can be non-contiguous, whereas a hard slice is contiguous. Therefore soft partitions can cause I/O performance degradation.
Outputs: Examples for editing the file /etc/lvm/md.tab CREATING THE MIRROR BY EDITING THE FILE /ETC/LVM/MD.TAB
"/etc/lvm/md.tab" 57 lines, 1453 characters
#
# Copyright 2003 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# ident "@(#)md.tab 2.5 03/09/11 SMI"
#
# md.tab
#
# metainit utility input file.
#
# The following examples show the format for local metadevices, and a
# similar example for a shared metadevice, where appropriate. The shared
# metadevices are in the diskset named "blue":
#
# Metadevice database entry:
#
# mddb01 /dev/dsk/c0t2d0s0 /dev/dsk/c0t0d0s0
#
# Concatenation of devices:
#
# d10 2 1 /dev/dsk/c0t2d0s0 1 /dev/dsk/c0t0d0s0
# blue/d10 2 1 /dev/dsk/c2t2d0s0 1 /dev/dsk/c2t0d0s0
#
# Stripe of devices:
#
# d11 1 2 /dev/dsk/c0t2d0s1 /dev/dsk/c0t0d0s1
# blue/d11 1 2 /dev/dsk/c2t2d0s1 /dev/dsk/c2t0d0s1
#
# Concatenation of stripes (with a hot spare pool):
#
# d13 2 2 /dev/dsk/c0t2d0s0 /dev/dsk/c0t0d0s0 \
#      2 /dev/dsk/c0t2d0s1 /dev/dsk/c0t0d0s1 -h hsp001
# blue/d13 2 2 /dev/dsk/c2t2d0s0 /dev/dsk/c2t0d0s0 \
#      2 /dev/dsk/c2t2d0s1 /dev/dsk/c2t0d0s1 -h blue/hsp001
#
# RAID of devices
#
# d15 -r /dev/dsk/c1t0d0s0 /dev/dsk/c1t1d0s0 \
#      /dev/dsk/c1t2d0s0 /dev/dsk/c1t3d0s0
# blue/d15 -r /dev/dsk/c2t0d0s0 /dev/dsk/c2t1d0s0 \
#      /dev/dsk/c2t2d0s0 /dev/dsk/c2t3d0s0
#
# Hot Spare Pool of devices
#
# hsp001 /dev/dsk/c1t0d0s0
# blue/hsp001 /dev/dsk/c2t0d0s0
#
# 100MB Soft Partition
#
# d1 -p /dev/dsk/c1t0d0s1 100M
# blue/d1 -p /dev/dsk/c2t0d0s1 100M
## create a replica
mddb01 -c6 c0t8d0s0
## creating metadevices
d0 1 1 c0t8d0s3
d10 1 1 c0t9d0s3
~
"/etc/lvm/md.tab" 61 lines, 1545 characters
bash-3.00# metadb -af mddb01
bash-3.00# metainit -a
d10: Concat/Stripe is setup
d0: Concat/Stripe is setup
bash-3.00# metastat
d0: Concat/Stripe
Size: 1027216 blocks (501 MB)
Stripe 0:
Device       Start Block   Dbase
Device Relocation Information:
Device    Reloc   Device ID
c0t8d0    Yes     id1,sd@SSEAGATE_ST318203LSUN18G_LR901376000010210UDS
c0t9d0    Yes     id1,sd@SSEAGATE_ST318203LSUN18G_LRA609240000W0270ZT6
bash-3.00# vi /etc/lvm/md.cf
"/etc/lvm/md.cf" 4 lines, 84 characters
# metadevice configuration file
# do not hand edit
d0 1 1 c0t8d0s3
d10 1 1 c0t9d0s3
Disk Set
The diskset feature lets us set up groups of host machines and disk drives in which all of the hosts in the set are connected to all the drives in the set.
Types of diskset:
a. Local diskset
b. Shared diskset
Local Diskset:
1. Each host in a diskset must have a local diskset.
2. The local diskset for a host consists of all drives which are not part of a shared diskset.
3. The host's local metadevice configuration is contained within this local diskset, in the local metadevice state database/replicas.
Shared Diskset:
1. A grouping of 2 hosts and disk drives in which all the drives are accessible by both hosts. Condition: DiskSuite requires that the device names be identical on each host in the diskset.
2. There is one metadevice state database per shared diskset.
Note:
1. Drives in a shared diskset must not be in any other diskset.
2. None of the partitions on any of the drives in a diskset can be mounted on, swapped on or part of a local metadevice.
3. All the drives in a shared diskset must be accessible by both hosts in the diskset.
4. Metadevices & hot spare pools in any diskset must consist of drives within that diskset. Likewise, metadevices & hot spare pools in the local diskset must be made up of drives from within the local diskset.
Naming convention:
1. Metadevices within the local diskset use the standard DiskSuite naming conventions.
2. Metadevices within the shared diskset use the following convention:
/dev/md/setname/(r)dsk/dnumber (usually 0 to 127)
Eg: /dev/md/dataset/(r)dsk/d10
3. Hotspare: setname/hsp000 (as usual, 0-999)
Note: The -s option is used with the standard DiskSuite commands to create, remove and administer metadevices/hot spare pools. If the -s option is NOT used, the command affects only the local diskset.
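The shared-diskset naming convention above is mechanical enough to generate in a script. A minimal sketch, using the example names from the text ('dataset' and d10):

```shell
# Build the shared-diskset device paths following the convention
# /dev/md/<setname>/(r)dsk/d<number>. 'dataset' and 10 are the
# example names used in the text.
setname=dataset
dnum=10
blockdev="/dev/md/${setname}/dsk/d${dnum}"
rawdev="/dev/md/${setname}/rdsk/d${dnum}"
printf '%s\n%s\n' "$blockdev" "$rawdev"
```

Paths built this way can be dropped straight into /etc/vfstab entries or newfs commands for shared-diskset metadevices.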
Defining disksets:
NOTE:
1. Before administering the diskset, make sure of:
a. The installation of the DiskSuite software on each host
b. Each host having local database replicas set up
2. All disks that we plan to share between hosts in the diskset must be connected to each host and must have the same name on each host.
3. Two basic operations are involved in defining disksets:
a. Adding hosts (adding the first host defines the diskset)
b. Adding drives
Syn: # metaset -s <setname> -a -h <hostname>
Eg: # metaset -s dataset -a -h node1 node2
Where -a = to add
-h = to specify the host
NOTE:
1. Adding the first host creates the diskset.
2. The last host cannot be deleted until all of the drives within the set have been deleted.
3. A host name is not accepted if all the drives within the diskset cannot be found on each specified host. In addition, a drive is not accepted if it cannot be found on all the hosts in the diskset.
# metaset - Displays the status of the diskset
Adding drives to the diskset:
Syn: # metaset -s <setname> -a <drivename>
Eg: # metaset -s dataset -a c2t1d0 c2t2d0 c2t3d0 c2t4d0
NOTE:
1. A drive name is not accepted if it cannot be found on all hosts specified as part of the diskset.
# metaset
Now we can observe the difference since disks are added to the diskset. The first host (here node1) is the owner of the diskset dataset.
2. Drives are repartitioned when they are added to the diskset, only if slice 7 is not set up properly. A small portion of each drive is reserved in slice 7 for use by the DiskSuite software.
3. The DiskSuite software tries to balance a reasonable number of replicas across all drives in a diskset.
4. Each drive in the diskset is probed once every second to determine that it is still reserved.
Administering disksets:
1. Reserving a diskset
2. Releasing a diskset
Reserving a diskset:
1. Before a host can use the drives in a diskset, the host must reserve the diskset.
2. a. Safely: 'metaset' checks to see if another host currently has the set reserved. If another host has the diskset reserved, this host will not be allowed to reserve the set.
Syn: # metaset -s <setname> -t
Eg: # metaset -s dataset -t
b. Forcefully: Will not check with the other hosts.
Syn: # metaset -s <setname> -t -f
Eg: # metaset -s dataset -t -f
Releasing a diskset:
1. When a diskset is released, it cannot be accessed by the host.
Syn: # metaset -s <setname> -r
Eg: # metaset -s dataset -r
# metaset
Observe the changes
Removing hosts & drives from disksets:
NOTE:
1. When drives are removed from/added to the diskset, DiskSuite balances the metadevice state database replicas across the remaining drives.
Syn: # metaset -s <setname> -d <drivename>
Eg: # metaset -s dataset -d c2t3d0
2. The -f option must be used when deleting the last drive in a set, since this drive would implicitly contain the last state database replica.
3. The last host can be removed from a diskset only after all drives in the diskset have been removed. Removing the last host from the diskset destroys the diskset.
Eg: # metaset -s dataset -d -h node2
(Here, the diskset will be removed from the host node2)
# metaset
Observe the changes
Adding drives or hosts to the diskset:
# metaset -s dataset -a c2t5d0
To add the drives
# metaset -s dataset -a -h node2
To add hosts
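Since `metaset` reports which host currently owns the set, ownership checks can be scripted. A sketch, where the sample text only approximates the real Host/Owner layout of `metaset` output (the awk logic is the illustrative part):

```shell
# Pick the owning host out of metaset-style status output. The sample
# below is an approximation of the real format, not live output.
metaset_out='Set name = dataset, Set number = 1

Host                Owner
  node1              Yes
  node2

Drive    Dbase
  c2t1d0  Yes'
owner=$(printf '%s\n' "$metaset_out" |
  awk '/^Host/ {h = 1; next} /^$/ {h = 0} h && $2 == "Yes" {print $1}')
echo "owner=$owner"
```

A check like this is handy before a takeover: only attempt `metaset -s dataset -t` when the set is not already owned by the other node.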
NOTE: In DiskSuite, a maximum of 2 hosts per diskset is supported.
VERITAS VOLUME MANAGER
STORAGE FOUNDATION SPECIALIST FOR UNIX 250-250 (Offered by Symantec)
Sun Microsystems offers certification as Sun Certified Veritas Volume Administrator.
Comparison of Solaris Volume Manager Software & Veritas Volume Manager
Sun Microsystems discourages the use of Veritas on the system disk (root disk/boot disk). Veritas volumes do not, by default, correspond to partitions. In the situation, irrespective of the cause, where the system no longer boots, the system administrator must be able to gain access to the file systems on the system disk without the drivers of the volume management software. This is guaranteed to be possible when each volume corresponds to a partition in the volume table of contents (VTOC) of the system disk. Solaris Volume Manager volumes can be accessed even when booted from CD-ROM. This in turn eliminates the need for breaking off a mirror during upgrades, thus reducing the downtime and complexity of such an operation. SVM software preserves the correspondence between the volumes defined in its state database and the disk partitions defined in the disk label (VTOC) at all times; disaster recovery is always possible by standard methods, without extra complications.
It's easy to grow /var using the VxVM graphical tool. This can be done by anyone at any time, to solve a disk space problem. However, this breaks the volume-partition relation, as the /var volume is now a concatenation of two (not necessarily contiguous) subdisks. When a disk breaks, the replacement disk is initialized. Slices 3 and 4 become the VxVM private and public regions, and subdisks are allocated to be mirrored with the surviving disk. Partitions may be created by VxVM software for these subdisks. There are 2 drawbacks to using SVM software in combination with VxVM software:
1. Cost
2. SVM software requires that a majority of the state databases be found at boot time (the quorum rule).
When all data disks are under VxVM software, only two disks may be left under SVM software. If one of these disks breaks, there is no state database quorum and the system will not boot without manual intervention. NOTE: The intervention consists of removing the inaccessible state database copies (using the metadb -d command) and rebooting. In the 2-disk configuration, the quorum rule can be disabled in /etc/system. The system will then boot unattended, even with one disk.
VxVM is storage management software used to manage volumes and data.
How is data stored? Hard disks are formatted and information is stored using 2 methods:
1. Physical storage layout
2. Logical storage layout
VxVM uses both physical objects and virtual objects to handle storage management.
PHYSICAL DISK / PHYSICAL OBJECT: hardware with block and raw OS device interfaces that is used to store the data.
VIRTUAL OBJECTS:
1. When one or more physical disks are brought under the control of Veritas, it creates virtual objects called VOLUMES on those physical disks.
2. Volumes and their physical components are called virtual objects or VxVM objects.
NOTE: VxVM control is accomplished only if VxVM takes control of the physical disk and the disk is not under the control of another storage manager such as SVM. Before the disk can be brought under VxVM control, the disk must be accessible through the operating system device interface. VxVM is layered on top of the OS interface services and is dependent upon how the OS accesses physical disks. VxVM depends on the OS for the following functionality:
1. OS disk devices
2. Device handles
Virtual Data Storage: Volume Manager creates a virtual layer of data storage.
1. The virtual storage object that is visible to users and applications is called a VOLUME.
2. A volume is a virtual object, created by VxVM, that stores the data.
3. It is made up of space from one or more physical disks on which the data is physically stored.
4. Volume Manager volumes appear to applications to be physical disk partitions.
5. All users and applications access volumes as contiguous address space, using special device files in a manner similar to accessing a disk partition.
6. Volumes have block & character device nodes in the /dev tree. For eg: /dev/vx/(r)dsk/...
VOLUME MANAGER CONTROL: When we place a disk under VxVM control, a CDS disk layout is used, which ensures that the disk is accessible on different platforms, regardless of the platform on which the disk was initialized.
Comparing CDS & sliced disks
CDS DISKS
1. Private region (metadata) and public region (user data) are created on a single partition.
2. Suitable for moving between different operating systems.
3. Not suitable for boot partitions.
SLICED DISKS
1. Private region & public region are created on separate partitions (for eg, at 3 and 4).
2. Not suitable for moving between different operating systems.
3. Suitable for boot partitions.
Note:
1. CDS (Cross-platform Data Sharing) disks have specific disk layout requirements that enable a common disk layout across different platforms, and these requirements are not compatible with the particular platform-specific requirements of boot devices. (CDS requires a Veritas Storage Foundation license)
2. Therefore, when placing a boot disk under Volume Manager control, we must use a SLICED disk layout.
Private Region:
1. Similar to the metadb or replica in Solaris Volume Manager software.
2. Created at the time of initialization of a disk to VxVM control.
3. On adding the disk to a disk group, the media name, disk access name, the disk name and the disk configuration are all written to the private region.
LOGICAL OBJECTS:
1. vmdisk
2. disk group
3. subdisk
4. plex
5. volume
PHYSICAL OBJECTS:
1. Controllers
2. Disks
VMDISK:
1. When a disk is brought under the control of VxVM, that disk is called a VMDISK.
2. Can bring the disk under VxVM by 2 methods:
a. Initialization:
1. Initializes the disk as a vmdisk.
2. The entire data on the disk will be overwritten, i.e., the data on the disk will be destroyed.
b. Encapsulation:
1. When a disk is brought under the control of VxVM with encapsulation, all the data (partitions) on the disk will be preserved.
3. Name can be up to 31 characters.
DISK GROUP:
1. A collection of Volume Manager disks that have been put together into a logical grouping.
2. Grouping of disks is for management purposes, such as to hold the data for a specific application or set of applications.
3. Volume Manager objects CANNOT span disk groups. For eg, volumes, SUB-DISKS, PLEXES and disks must be derived from the same disk group. Can create additional disk groups as necessary.
4. Disk groups ease the use of devices in a high availability environment, because a disk group and its components can be moved as a unit from one host machine to another.
SUB-DISKS:
1. A Volume Manager disk can be divided into one or more subdisks.
2. A collection of contiguous blocks that represent a specific portion of a Volume Manager disk, which is mapped to a specific region of a physical disk.
3. A subsection of a disk's public region.
4. The smallest unit of storage in Volume Manager.
5. Conceptually, a subdisk is similar to a partition.
6. Max size of a subdisk is the size of the vmdisk.
7. Can create 4096 subdisks/vmdisk.
8. A subdisk cannot be shared among two plexes.
PLEX:
1. Volume Manager uses subdisks to build virtual objects called PLEXES.
2. A structured or ordered collection of subdisks from one or more vmdisks.
3. Cannot be shared by 2 volumes.
4. Maximum number of plexes per volume is 32.
5. Between 2 plexes of the same volume, mirroring occurs by default.
6. Can have a minimum of one subdisk and a maximum of 4096 subdisks.
7. 3 types of plexes:
a. Complete plex: holds a complete copy of a volume
b. Log plex: dedicated to logging
c. Sparse plex:
1. which is not a complete copy of the
2. Sparse plexes are not used in newer versions of Volume Manager.
8. Can organize data on subdisks to form a plex by using the following:
a. Concatenation
b. Striping
c. Mirroring
d. Striping with parity
volume
VOLUME:
1. A collection of plexes.
2. A virtual storage device that is used by applications in a manner similar to a physical disk. Due to its virtual nature, a volume is not restricted by the physical disk size constraints that apply to a physical disk.
3. A volume can be as large as the total sum of available unreserved free physical disk space.
4. The minimum number of plexes in a volume is 1. The maximum number of plexes in a volume is 32.
5. The size of the volume is the size of the smallest plex.
6. The maximum size of a volume is the size of the disk group.
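Point 5 above (a mirrored volume is only as big as its smallest plex) reduces to a minimum over the plex sizes. A sketch with hypothetical plex sizes in blocks:

```shell
# The usable size of a mirrored volume is the size of its smallest plex.
# The three plex sizes below (in blocks) are hypothetical examples.
plex_sizes='1027216 1015808 2054432'
vol_size=$(printf '%s\n' $plex_sizes | sort -n | head -1)
echo "volume size = $vol_size blocks"
```

This is why mirroring across unequal slices wastes space on the larger slice: the extra blocks beyond the smallest plex are never used by the volume.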
DAEMONS:
1. vxconfigd - the main configuration daemon of VxVM, responsible for maintaining the vmdisk & disk group information
2. vxrelocd - responsible for hot relocation
3. vxsvc - required for VEA (Veritas Enterprise Administrator)
4. vxnotify - responsible for notifying of device object failures
5. vxiod - provides I/O operations
PACKAGE NAMES:
1. VRTSvlic - licensing utilities
2. VRTSvxvm - VxVM binaries
3. VRTSob - VEA service
4. VRTSfspro - File system service provider
5. VRTSfsman - VxFS manual pages
6. VRTSvxman - manual pages
7. VRTSobgui - VEA graphical user interface
8. VRTSvmpro - Disk management service provider
9. VRTSvxfs - VxFS software and manual pages
TO SET THE ENVIRONMENT VARIABLES (After installation of the VxVM)
Edit the file /etc/profile:
PATH=$PATH:/opt/VRTS/bin:/etc/vx/bin
MANPATH=$MANPATH:/opt/VRTS/man
export PATH MANPATH
:wq!
NOTE: Most commands are located in
1. /etc/vx/bin
2. /usr/sbin
3. /usr/lib/vxvm/bin
INSTALLING THE VERITAS PRODUCTS:
1. Can install by running the script from the cdrom
2. Can install the required packages manually with the # pkgadd command.
NOTE: While adding the packages manually, please ensure the following:
1. Ensure the packages are installed in the correct order
2. Always install VRTSvlic first
3. Always install the VRTSvxvm package before other VxVM packages
4. Documentation and manual pages are optional
5. After installing the packages, using OS specific commands, run vxinstall to configure VxVM for the first time.
# vxinstall -> to install the license key.
Verifying package installation:
# pkginfo -l VRTSvxvm
vxinstall:
1. An interactive program that guides us through the initial VxVM configuration.
2. The main steps in the vxinstall process are:
a. entering the license key
b. selecting the naming method
1. Enclosure based naming
2. Traditional naming
3. If desired, setting up a system-wide default disk group
Output:
bash-3.00# vxinstall
VxVM uses license keys to control access. If you have a SPARCstorage Array (SSA) controller or a Sun Enterprise Network Array (SENA) controller attached to your system, then VxVM will grant you a limited use license automatically. The SSA and/or SENA license grants you unrestricted use of disks attached to an SSA or SENA controller, but disallows striping, RAID-5, and DMP on non-SSA and non-SENA disks. If you are not running an SSA or SENA controller, then you must obtain a license key to operate.
Licensing information:
System host ID: 832d10ed
Host type: SUNW,Sun-Fire-280R
SPARCstorage Array or Sun Enterprise Network Array: No arrays found
Some licenses are already installed.
Do you wish to review them [y,n,q,?] (default: y) y

Symantec License Manager vxlicrep utility version 3.02.16.0
Copyright (C) 1996-2006 Symantec Corporation. All rights reserved.
Creating a report on all VERITAS products installed on this system
-----------------***********************-----------------
License Key      = iezu-wdp9-dw6w-yzo4-w2z7-pp8o-ppz
Product Name     = VERITAS Storage Foundation Standard HA
Serial Number    = 2447
License Type     = PERMANENT
OEM ID           = 2006
Editions Product = YES
(output truncated...)

bash-3.00# vxlicrep | more
Symantec License Manager vxlicrep utility version 3.02.16.0
Copyright (C) 1996-2006 Symantec Corporation. All rights reserved.
Creating a report on all VERITAS products installed on this system
-----------------***********************-----------------
License Key      = iezu-wdp9-dw6w-yzo4-w2z7-pp8o-ppz
Product Name     = VERITAS Storage Foundation Standard HA
Serial Number    = 2447
License Type     = PERMANENT
OEM ID           = 2006
Editions Product = YES
(output truncated...)

bash-3.00# vxinstall
VxVM uses license keys to control access. If you have a SPARCstorage Array (SSA) controller or a Sun Enterprise Network Array (SENA) controller attached to your system, then VxVM will grant you a limited use license automatically. The SSA and/or SENA license grants you unrestricted use of disks attached to an SSA or SENA controller, but disallows striping, RAID-5, and DMP on non-SSA and non-SENA disks. If you are not running an SSA or SENA controller, then you must obtain a license key to operate.
Licensing information:
System host ID: 832d10ed
Host type: SUNW,Sun-Fire-280R
SPARCstorage Array or Sun Enterprise Network Array: No arrays found
Some licenses are already installed.
Do you wish to review them [y,n,q,?] (default: y) n
Do you wish to enter another license key [y,n,q,?] (default: n) n
Do you want to use enclosure based names for all disks ? [y,n,q,?] (default: n) y
Starting the relocation daemon, vxrelocd.
Starting the cache daemon, vxcached.
Starting the diskgroup config backup daemon, vxconfigbackupd.
Starting the dg monitoring daemon for rlinks with STORAGE protocol, vxvvrsecdgd.
Do you want to setup a system wide default disk group? [y,n,q,?] (default: y) y
Which disk group [,list,q,?] list
NAME         STATE        ID
Which disk group [,list,q,?] newdg
The installation is successfully completed.
bash-3.00# vxlicrep -s
Symantec License Manager vxlicrep utility version 3.02.16.0
Copyright (C) 1996-2006 Symantec Corporation. All rights reserved.
Creating a report on all VERITAS products installed on this system

License Key    = iezu-wdp9-dw6w-yzo4-w2z7-pp8o-ppz
Product Name   = VERITAS Storage Foundation Standard HA
License Type   = PERMANENT

License Key    = 3EZU-3YK6-92TM-XJ6P-PZCN-PRDP-Z
Product Name   = VERITAS File System
License Type   = PERMANENT

License Key    = PZZH-PCWG-CTFZ-I2YC-NPZK-P6
Product Name   = VERITAS Cluster Server
License Type   = PERMANENT
Enclosure based naming:
a. OS independent
b. Based on the enclosure
c. Can be customized to make names meaningful
Traditional naming:
a. Operating system dependent
b. Based on physical connectivity information
Solaris: /dev/(r)dsk/c0t0d0s2
SELECTING A NAMING SCHEME:
We can select the naming scheme:
1. When we run the VxVM installation scripts
2. Anytime by using the # vxdiskadm option - "Change the disk naming scheme"
Note: This operation requires the VxVM configuration daemon, 'vxconfigd', to be stopped and restarted.
If we choose enclosure based naming:
1. Disks are displayed in 3 categories.
2. Enclosure:
a. Disks in supported RAID disk arrays are displayed in the enclosurename_# format
b. Disks: Disks in supported JBOD (Just a Bunch Of Disks) disk arrays are displayed with the prefix Disk_
c. Others: Disks that do not return a path independent identifier to VxVM are displayed in the traditional OS based format
Output: To change the naming scheme:

Select disk devices to add: [,all,list,q,?] list
DEVICE    DISK   GROUP   STATUS
Disk_0    -      -       online invalid
Disk_1    -      -       online invalid
Disk_2    -      -       online
Disk_3    -      -       online
Disk_4    -      -       online invalid
Disk_5    -      -       online

Select disk devices to add: [,all,list,q,?] q

After choosing option 20: Change the disk naming scheme

Select disk devices to add: [,all,list,q,?] list
DEVICE    DISK   GROUP   STATUS
(output truncated...)

Select disk devices to add: [,all,list,q,?]
VxVM user interfaces:
1. Supports 3 user interfaces:
a. VEA - a GUI that provides access through icons, menus, wizards and dialog boxes
b. CLI - UNIX utilities that are invoked from the command line
c. vxdiskadm - a menu driven, text based interface, also invoked from the command line
Output: # vxdiskadm
Volume Manager Support Operations
Menu: VolumeManager/Disk
 1      Add or initialize one or more disks
 2      Encapsulate one or more disks
 3      Remove a disk
 4      Remove a disk for replacement
 5      Replace a failed or removed disk
 6      Mirror volumes on a disk
 7      Move volumes from a disk
 8      Enable access to (import) a disk group
 9      Remove access to (deport) a disk group
 10     Enable (online) a disk device
 11     Disable (offline) a disk device
 12     Mark a disk as a spare for a disk group
 13     Turn off the spare flag on a disk
 14     Unrelocate subdisks back to a disk
 15     Exclude a disk from hot-relocation use
 16     Make a disk available for hot-relocation use
 17     Prevent multipathing/Suppress devices from VxVM's view
 18     Allow multipathing/Unsuppress devices from VxVM's view
 19     List currently suppressed/non-multipathed devices
 20     Change the disk naming scheme
 21     Get the newly connected/zoned disks in VxVM view
 22     Change/Display the default disk layouts
 23     Mark a disk as allocator-reserved for a disk group
 24     Turn off the allocator-reserved flag on a disk
 list   List disk information
 ?      Display help about menu
 ??     Display help about the menuing system
 q      Exit from menus

Select an operation to perform:
NOTE: vxdiskadm only provides access to certain disk and disk group management functions.
COMMANDS:
1. # vxdiskadm - used to add or initialize one or more disks, encapsulate one or more disks, remove a disk, remove a disk for replacement, replace a failed or removed disk, move volumes from a disk, enable access to (import) a disk group, remove access to (deport) a disk group, enable (online) a disk device, disable (offline) a disk device, mark a disk as a spare for a disk group, turn off the spare flag on a disk, and list disk information.
2. # vxassist - utility used to create volumes, add mirrors & logs to existing volumes, extend & shrink existing volumes, provide migration of data from a specified set of disks, and provide facilities for the online backup of existing volumes.
SYNTAX: # vxassist