Tuesday, May 18, 2010

Motherboard Replacement - Cluster

==Action Plan: Replacing Motherboard in cluster environment==
Goal: You will be replacing the motherboard for the system named XXX with serial number: XXX.
Impact: This action plan is non-disruptive, but only if the correct parts are removed.
Please contact NetApp support immediately if there are any questions or concerns regarding this action plan.
System Downtime: 0 minutes
Plan Duration: 60 minutes
These steps assume that the system with the faulty motherboard is up and running.
A laptop with a console cable is required to perform this action plan.

BEGIN ACTION PLAN
On the partner head
1. If the cluster is not already in takeover, fail over to the filer that does not have the bad motherboard.
filer2> cf takeover [wait for takeover to complete]
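To confirm the takeover has completed before proceeding, cf status on the partner should report that it has taken over. Illustrative output:
filer2> cf status
filer2 has taken over filer1.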
2. Gather information about the FC ports
filer2> partner fcadmin config [save the fc configuration information for later use in step 7]
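The saved output lists each FC adapter with its configured type (initiator or target). The adapter names and states below are illustrative only:
filer2> partner fcadmin config
 Adapter  Type       State                    Status
   0a     initiator  CONFIGURED               online
   0b     target     CONFIGURED               online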
On the head with the bad motherboard
3. Replace the motherboard
4. Move the NVRAM card and any other add-on cards from the old motherboard to the new one
5. Interrupt the boot process and, at the firmware prompt, set the partner sysid variable (substitute the partner's actual system ID for the placeholder):
LOADER> setenv partner-sysid <partner_sysid>
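To confirm the variable took effect before booting, the firmware's printenv command can be used (illustrative):
LOADER> printenv partner-sysid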
6. Boot into maintenance mode
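One way to reach maintenance mode: boot Data ONTAP from the firmware prompt, press Ctrl-C when the special boot menu is offered, and choose the maintenance mode option (menu wording and numbering vary by release):
LOADER> boot_ontap
[press Ctrl-C when prompted for the special boot menu]
Selection (1-5)? 5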
7. Display the current setting of the FC ports (initiator vs target) and compare with the output saved in step 2.
filer1> fcadmin config
8. Set each FC port to target or initiator as needed. The adapter name is required; <adapter> below is a placeholder (e.g. 0a).
filer1> fcadmin config -t target <adapter>
and/or
filer1> fcadmin config -t initiator <adapter>
9. Verify the FC port config and ensure that the correct ports are set to initiator vs target:
filer1> fcadmin config
NOTE: The fcadmin config output should match the output saved in step 2.
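For example, if the step 2 output showed 0c as a target but the new motherboard boots with 0c as an initiator, you would run the following (0c is a placeholder adapter name):
filer1> fcadmin config -t target 0c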
10. Verify that you can see the correct number of drives from maintenance mode.
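A quick way to check disk visibility from maintenance mode is disk show, comparing the count against what this head owned before the replacement (illustrative):
filer1> disk show -v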
11. Halt and boot to 'Waiting for giveback':
filer1> halt
LOADER> bye
On the partner head
12. Perform cf giveback
filer2> cf giveback
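Once the giveback completes, cf status on either head should report normal cluster operation. Illustrative output:
filer2> cf status
Cluster enabled, filer1 is up.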
END ACTION PLAN

http://www.netapp-net2.com/documents/3/54/MB-FAS3040-70-SA300-006cfp.pdf

Wednesday, March 17, 2010

FlexShare


FlexShare - Dynamically prioritizes storage traffic at the volume level.

Three Independent, Tunable Parameters

Each workload in your storage environment has a different level of importance to your business, so a "one-size-fits-all" approach to storage resource allocation doesn't make sense. FlexShare offers three parameters that can be configured independently on each volume to tune your storage to the needs of every application:

1. Relative priority. Assign a priority from "VeryLow" to "VeryHigh" to each volume. A higher priority gives a volume a greater percentage of available resources when a system is fully loaded. If higher priority applications aren't busy, lower priority applications can use available resources without limitation.

2. User versus system priority. Prioritize user workloads (application and end-user traffic) versus system work (backup, replication, and so on) or vice versa. For instance, give your OLTP application priority over SnapMirror®.

3. Cache utilization. Configure the cache to retain data in cache or reuse the cache depending on workload characteristics. Optimizing cache usage can significantly increase performance for data that is frequently read and/or written.
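For example, the cache behavior from item 3 above is set per volume with the cache option, which accepts keep, reuse, or default (vol1 is a placeholder volume name):

Filer> priority set volume vol1 cache=keep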

Filer> priority
The following commands are available; for more information
type "priority help <command>"
    delete          help            off             on
    set             show

Filer> priority show
Priority scheduler is stopped.

Priority scheduler system settings:
io_concurrency: 8

Filer> priority on
Priority scheduler starting.
Filer> Wed Mar 17 14:46:38 EDT [Filer: wafl.priority.enable:info]: Priority scheduling is being enabled

Valid level and system options include:

  1. VeryHigh
  2. High
  3. Medium
  4. Low
  5. VeryLow
Filer> priority set volume vol1 level=High
Filer> priority show volume -v vol1
Volume: vol1
Enabled: on
Level: High
System: Medium
Cache: n/a
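
The level and system options can also be combined in a single command, for instance to favor user traffic on vol1 while deprioritizing system work such as SnapMirror (vol1 is a placeholder):

Filer> priority set volume vol1 level=High system=Low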

Filer> priority delete volume vol2
Filer> priority show volume vol2
Unable to find priority scheduling information for 'vol2'
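
After a delete, the volume reverts to the scheduler's default settings; assuming the priority show default form is available in your release, those defaults can be inspected with:

Filer> priority show default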

Below is a sample output of the FlexShare counters:

NetApp1*> stats show prisched
prisched:prisched:queued:0
prisched:prisched:queued_max:5

NetApp1*> stats show priorityqueue
priorityqueue:vol1:weight:76
priorityqueue:vol1:usr_weight:78