Wednesday, March 17, 2010

FlexShare


FlexShare - Dynamically prioritize storage traffic at the volume level.

Three Independent, Tunable Parameters

Each workload in your storage environment has a different level of importance to your business, so a "one-size-fits-all" approach to storage resource allocation doesn't make sense. FlexShare offers three parameters that can be configured independently on each volume to tune your storage to the needs of every application:

1. Relative priority. Assign a priority from "VeryLow" to "VeryHigh" to each volume. A higher priority gives a volume a greater percentage of available resources when a system is fully loaded. If higher priority applications aren't busy, lower priority applications can use available resources without limitation.

2. User versus system priority. Prioritize user workloads (application and end-user traffic) versus system work (backup, replication, and so on) or vice versa. For instance, give your OLTP application priority over SnapMirror®.

3. Cache utilization. Configure the cache to retain data in cache or reuse the cache depending on workload characteristics. Optimizing cache usage can significantly increase performance for data that is frequently read and/or written.
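All three parameters can be combined in a single `priority set volume` command. A hedged sketch, mirroring the console session below: the `system=` and `cache=` option names correspond to the System and Cache fields shown by `priority show volume -v`, and the `cache=keep` value is an assumption based on FlexShare documentation rather than output reproduced here:

```
Filer> priority set volume vol1 level=High system=Low cache=keep
```

In this sketch, `level=High` raises vol1's priority relative to other volumes, `system=Low` favors user workloads over system work (such as SnapMirror) on vol1, and `cache=keep` asks the system to retain vol1's data in cache.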

Filer> priority
The following commands are available; for more information
type "priority help "
delete          off             set             show
help            on

Filer> priority show
Priority scheduler is stopped.

Priority scheduler system settings:
io_concurrency: 8

Filer> priority on
Priority scheduler starting.
Filer> Wed Mar 17 14:46:38 EDT [Filer: wafl.priority.enable:info]: Priority scheduling is being enabled

Valid level and system options include:

  1. VeryHigh
  2. High
  3. Medium
  4. Low
  5. VeryLow
Filer> priority set volume vol1 level=High
Filer> priority show volume -v vol1
Volume: vol1
Enabled: on
Level: High
System: Medium
Cache: n/a

Filer> priority delete volume vol2
Filer> priority show volume vol2
Unable to find priority scheduling information for 'vol2'

Below is sample output of the FlexShare counters:

NetApp1*> stats show prisched
prisched:prisched:queued:0
prisched:prisched:queued_max:5

NetApp1*> stats show priorityqueue
priorityqueue:vol1:weight:76
priorityqueue:vol1:usr_weight:78
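The counter lines above follow a simple colon-delimited `object:instance:counter:value` layout. A minimal, hypothetical parser (not a NetApp tool; the sample lines are taken from the output above) shows how they can be collected into a nested dictionary:

```python
def parse_stats(lines):
    """Parse 'object:instance:counter:value' lines into a nested dict."""
    stats = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue
        obj, instance, counter, value = line.split(":")
        stats.setdefault(obj, {}).setdefault(instance, {})[counter] = int(value)
    return stats

# Sample lines from the FlexShare counter output above.
sample = [
    "prisched:prisched:queued:0",
    "prisched:prisched:queued_max:5",
    "priorityqueue:vol1:weight:76",
    "priorityqueue:vol1:usr_weight:78",
]

counters = parse_stats(sample)
print(counters["priorityqueue"]["vol1"]["weight"])   # -> 76
```

This makes it easy to track a volume's relative weight over time or compare weights across volumes after changing priority levels.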