NetApp OnCommand Performance Manager

New software adds control, monitoring

NetApp recently introduced OnCommand Performance Manager, new software to monitor and troubleshoot a customer’s Data ONTAP environment.

“With OnCommand,” the company states, “enterprises of all sizes can better enable, control, automate, and analyze a cost-efficient storage management infrastructure. With Data ONTAP at their core, OnCommand tools help IT optimize storage utilization, meet service-level agreements, minimize risk, boost performance, and deliver nondisruptive operations.”

The company also announced updates to other components of the OnCommand portfolio designed to provide faster IT service delivery through improved storage management, automation, and protection.

Noting the importance of storage and data management in the transition to software-defined service delivery, a company press release emphasized how OnCommand enables organizations “to monitor trends in their data center, understand performance goals and the attainment of those goals, and improve control over their storage environment.”

On a more granular level, “Performance Manager gives storage administrators the ability to know whether a cluster or volume is experiencing a performance issue and then identifies which resources are impacting volume response times,” the company stated. “The limits of ‘normal’ performance are automatically calculated and continually adjusted based on workload patterns. This new insight allows the identification of abnormal behavior and the correlation of events across multiple affected volumes to quickly identify the source of the problem, saving organizations time and money. Performance Manager can be operated from within OnCommand’s Unified Manager or as a standalone application.”

In related news, techworld.com reported on June 15 that NetApp had announced a new version of its OnCommand management software that allows its storage arrays to plug into most leading cloud management systems used across an IT infrastructure.

netapp.com

techworld.com


Innovation Awards Honor ‘Transformative IT’

NetApp lauds customer breakthroughs

NetApp handed out its annual Innovation Awards earlier this month, honoring its “visionary enterprise customers [who] are successfully navigating through the most complex period in IT with the help of NetApp” and its partners, in the words of Rob Salmon, NetApp President and Head of Go-to-Market Operations.

The winners were selected from more than 100 nominations received from over two dozen countries. The awards ceremony took place on June 4, 2014, at The Computer History Museum in Mountain View, California.

Symantec won in the “Go Beyond” category, which recognizes innovation in cloud services. “We worked closely with NetApp and their partners to create an on-demand software-defined data center based on a NetApp FAS shared storage infrastructure, clustered Data ONTAP, and OnCommand technologies,” said Sheila Jordon, senior vice president and CIO. The approach slashed the time to build data center environments from days to minutes.

Innovator of the Year honors (Americas division) were awarded to ION Geophysical Corporation, which helps oil and gas companies “better understand the earth’s subsurface through innovative seismic data processing and imaging solutions,” according to Mill Menger, director of High-Performance Computing. Growth required the company to move its entire computing environment into a new facility. “Using NetApp’s operating system — clustered Data ONTAP — and FAS storage systems, we were able to transition our data to the new data center without disruption to operations,” said Menger.

The State of California Natural Resources Agency won in the Pioneer category (innovative small to mid-size companies), Public Sector. Its director of IT, Tim Garza, explained how the agency “depends on real-time data and systems to make informed decisions in the management of critical resources and respond quickly to natural disasters.” The agency was able to meet growing demand with fewer resources through a shared-services cloud-computing model built on industry technologies that utilize clustered Data ONTAP. “This enabled us to consolidate 28 siloed data centers and decrease overall capital and operational expenditures by 45% and 35%, respectively,” noted Garza. “Now we are able to operate at the speed of business without technology being a constraint.”

See the complete list of winners at netapp.com


Data Plays Star Role in Growth of World Cup

Mobile tech, global interest fuel surge

In 1998, the World Cup was held in France. Eighty thousand spectators attended the event, many with a mobile phone such as the Nokia 5110, with which they could send and receive text messages that probably consumed a total of about 2MB of bandwidth (roughly the equivalent of loading an image-heavy Web page).

But, as a blog post and infographic on netapp.com reveal, the data traffic spawned by World Cups since then has exploded. By 2006, advances in mobile technology had added MMS, email, and some Web traffic to the “spectator bandwidth” mix; the 69,000 spectators in Germany would have used 30GB of bandwidth. And for the first time, portions of the action were streamed online — 125 million times.

At this year’s World Cup in Brazil, the 73,531 spectators at the final match are projected to unleash over 12 terabytes of data, if each shares a one-minute HD video. In addition, 79% of viewers in the U.S. are expected to watch the games live online at home, on mobile devices, and at work.

The blog post also notes that anti-terrorism efforts utilizing facial recognition headsets, high-speed WiFi, and powerful data centers will add another layer of heavy data transmission.

By the 2022 World Cup in Qatar, wearable devices like biometric monitoring devices and eyeglass-mounted or hat-mounted computers will be commonplace, and the 86,250 spectators will likely consume 1.3 petabytes of bandwidth. A “media membrane” will wrap the outside of the stadium and show match coverage — which sounds cool until you consider Japan’s plan (had it won the right to host the Cup) to broadcast the matches in real-time to stadiums around the world as 3D holographic projections.

communities.netapp.com


SnapVault on Clustered Data ONTAP 8.2

So with cDOT 8.2 we got SnapVault back. Yeah!

That being said, I thought I would walk you all through how to configure it, since I think some of the settings are not all that intuitive.

I will be doing this using two data vservers on two separate clusters. You could set up SnapVault between data vservers within the same cluster, but if the resources are available, we often prefer to archive to a different location.

 

Overview of Necessary Steps

    1. Create a data vserver on the destination cluster with DP volumes
    2. Create intercluster lifs on all nodes in both clusters
    3. Configure a peer relationship between the two clusters
    4. Configure a peer relationship between the source and destination vserver
    5. Make sure the SnapMirror labels are specified in the appropriate SnapShot policy on the Source cluster
    6. Apply that SnapShot Policy to the Source volume
    7. Create a Vault Policy on the Destination data vserver specifying the number of each type of SnapShot that you wish to retain on the destination (based upon the SnapShot label matching the SnapMirror label specified on the Source)
    8. On the Destination data vserver define a SnapVault relationship between the source volume and the destination volume, specifying the replication schedule and the Vault Policy required

 

1. Create a Data VServer on the Destination Cluster with DP Volumes

First things first, of course, we need someplace to SnapVault to. So we create a vserver on the destination cluster and at least one volume, other than the root volume. The volume type should be DP, which will restrict access to the volume. This is the same as I would do if I were setting up a volume to be SnapMirrored to. Here you can see the Create Volume dialog box in OnCommand System Manager. The data vserver has already been created; I am just adding an additional volume and selecting “Data Protection” as the Storage Type. (Remember, if you don’t see certain options in System Manager, it is often because you haven’t licensed your systems correctly.)
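If you prefer the command line, the equivalent volume creation looks roughly like this; the vserver, aggregate, and volume names here are hypothetical placeholders:

```
cluster2::> volume create -vserver dst_svm -volume vault_dst -aggregate aggr1 -size 100g -type DP
```

The "-type DP" flag is what marks the volume as a data-protection target and restricts client access to it.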

Create Volume Dialog Box in OnCommand System Manager


 

2. Create Intercluster Lifs on All Nodes in Both Clusters

Best practice is to have at least one intercluster lif per node in each cluster. You create them through the command line using “net int create”. The role must be specified as “intercluster”, and the role of the ports that you assign the lifs to must be either “intercluster” or “data”. All intercluster lifs on all nodes within each cluster must be able to communicate with each other; this is referred to as a full-mesh topology. So make sure IP addresses, subnets, and VLANs are configured appropriately.

Here are the commands I ran on each one of my clusters to create the appropriate intercluster lifs.

[Screenshots: the intercluster lif creation commands run on Cluster1 and Cluster2]
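For reference, the commands follow this general shape on cDOT 8.2; the lif names, nodes, ports, and addresses below are hypothetical, so substitute your own:

```
cluster1::> network interface create -vserver cluster1 -lif ic01 -role intercluster -home-node cluster1-01 -home-port e0c -address 10.10.1.101 -netmask 255.255.255.0
cluster1::> network interface create -vserver cluster1 -lif ic02 -role intercluster -home-node cluster1-02 -home-port e0c -address 10.10.1.102 -netmask 255.255.255.0

cluster2::> network interface create -vserver cluster2 -lif ic01 -role intercluster -home-node cluster2-01 -home-port e0c -address 10.10.1.201 -netmask 255.255.255.0
cluster2::> network interface create -vserver cluster2 -lif ic02 -role intercluster -home-node cluster2-02 -home-port e0c -address 10.10.1.202 -netmask 255.255.255.0
```

Note that the intercluster lifs live on the admin vserver (the cluster itself), not on a data vserver.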

 

3. Configure a Peer Relationship between the Two Clusters

After you have created the intercluster lifs, you might have to fully refresh the connection in System Manager before configuring the peer relationship, because System Manager might not automatically see that those intercluster lifs have been created.

Technically, if you use System Manager it will not make a difference which cluster you connect to in order to define the peer relationship. Once you create it on one cluster, the relationship in the other direction is created automatically.

You will find the Peer Create option under the Cluster tab on the left side. For my demo I connected to cluster2 in order to define the Peer Relationship, but as you will see once I define it and connect to cluster1 I will see the relationship there as well.


 

On the left you should see the intercluster lifs you have created for whichever cluster you are connected to. On the right, the dropdown should have the name of the other cluster you have created intercluster lifs for, the cluster you want to set up a peer relationship with. If you don’t see the name of the other cluster in the dropdown, make sure your intercluster lifs were created correctly and that you can ping from the nodes on one cluster to the nodes on the other cluster. Also, don’t forget to refresh System Manager after you define the lifs.


 

Once you select the remote cluster from the dropdown, click Authenticate and type in administrative credentials for the other cluster.


 

Once you successfully authenticate with the other cluster you should see its intercluster lifs listed as well.
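The same peering can also be done from the command line. A rough sketch, assuming the hypothetical intercluster lif addresses used earlier:

```
cluster2::> cluster peer create -peer-addrs 10.10.1.101,10.10.1.102
```

You will be prompted for administrative credentials on the remote cluster; afterwards, "cluster peer show" run on either cluster should list the peer.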


 

4. Configure a Peer Relationship between the Source and Destination VServer

Technically you can also do this when you define the SnapVault relationship through OnCommand System Manager, so you don’t necessarily have to do it before you complete the rest of the steps. But here is the command anyway:

[Screenshot: the “vserver peer create” command]
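As a sketch (the vserver names here are hypothetical), the peering commands look like this; run the create on one side and, if the relationship shows as pending, accept it from the other:

```
cluster1::> vserver peer create -vserver src_svm -peer-vserver dst_svm -peer-cluster cluster2 -applications snapmirror

cluster2::> vserver peer accept -vserver dst_svm -peer-vserver src_svm
```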

 

5. Make Sure the SnapMirror Labels Are Specified In the Appropriate Snapshot Policy on the Source Cluster

Now comes the tricky and not-so-intuitive part. In order to configure the SnapVault relationship to retain a certain number of particular snapshots on the destination vserver, you will have to specify SnapMirror labels within the SnapShot policy used on the source. You can access the SnapShot Policies through the Cluster tab on the left, under Configuration.

This is a screenshot of the SnapShot Policy on my source cluster that I later applied to my source volume. The SnapMirror Labels I will be using to reference particular SnapShots within my Vault Policy in Step 7 are listed in the column on the right.

[Screenshot: the source SnapShot Policy with its SnapMirror Labels]
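The labels can also be attached from the command line; a sketch, assuming a hypothetical policy named vault_snaps with hourly and daily schedules:

```
cluster1::> volume snapshot policy modify-schedule -vserver src_svm -policy vault_snaps -schedule hourly -snapmirror-label hourly
cluster1::> volume snapshot policy modify-schedule -vserver src_svm -policy vault_snaps -schedule daily -snapmirror-label daily
```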

 

6. Apply That Snapshot Policy to the Source Volume

I need to make sure the SnapShot policy I have modified on my source cluster is applied to my source volume.

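From the command line, applying the policy is a one-liner (the vserver, volume, and policy names are hypothetical):

```
cluster1::> volume modify -vserver src_svm -volume src_vol -snapshot-policy vault_snaps
```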

 

7. Create a Vault Policy on the Destination Data VServer

And now back to the destination cluster. Here is where my SnapVault Policy (Vault Policy) will match the SnapMirror labels I specified in Step 5. Under my destination vserver I will see Policies and, nested underneath, Protection Policies. I need to create a new Vault Policy. What I specify in the Snapshot Copy Label column has to match the SnapMirror labels I set in Step 5. So what I am saying here is that I want to keep 20 of the daily snapshots on the destination and 10 of the hourly snapshots.

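The command-line equivalent is a snapmirror policy with one rule per label; a sketch with a hypothetical policy name and the retention counts from the example above:

```
cluster2::> snapmirror policy create -vserver dst_svm -policy vault_pol
cluster2::> snapmirror policy add-rule -vserver dst_svm -policy vault_pol -snapmirror-label daily -keep 20
cluster2::> snapmirror policy add-rule -vserver dst_svm -policy vault_pol -snapmirror-label hourly -keep 10
```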

 

8. Define a SnapVault Relationship

Finally I can define and initialize the SnapVault relationship. Under my destination vserver I can go to Protection and create a new Vault relationship, choosing the appropriate source resources and either selecting or creating a destination volume. This is also where a peer relationship with the source vserver will automatically be created if you have not already done so through the command line.

The Vault policy option will reference the Vault policy you created in Step 7, and the schedule you choose will determine how often changes are transferred to the destination volume. Once you fill in all the options, you can have it initialize the relationship automatically when you click Create, and the SnapVault is fully configured.

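For completeness, the command-line version of creating and initializing the relationship looks roughly like this; the paths, schedule, and policy name are hypothetical, and "-type XDP" is what makes it a vault relationship:

```
cluster2::> snapmirror create -source-path src_svm:src_vol -destination-path dst_svm:dst_vol -type XDP -schedule daily -policy vault_pol
cluster2::> snapmirror initialize -destination-path dst_svm:dst_vol
cluster2::> snapmirror show -destination-path dst_svm:dst_vol
```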

 


Author Spotlight: Unitek Education Instructor Alicia T.

Instructor Alicia T. has taught at Unitek Education for nearly nine years. She brings two decades of teaching experience to her classroom, as well as prior experience as an IT consultant. Alicia teaches students the skills they need to master NetApp software and hardware, emphasizing the connections between various tools and platforms so that developing IT professionals gain the proficiency they need to succeed in the field.


Reinforce Your Data ONTAP Clustering Knowledge with Unitek Education

Unitek Education’s NetApp Clustered Data ONTAP 8.2 Administration is a five-day instructor-led course focused on the evolution of Data ONTAP clustering. The course will teach you how to use new features including QoS, data-in-place online controller upgrades, and System Manager while discussing scalability, the architecture and functionality of a Data ONTAP cluster, and how to install, configure, manage, and troubleshoot a Data ONTAP cluster.


NetApp CTO Offers 10 Tech Predictions

Hybrid cloud, all-flash startups to flourish

On the NetApp 360 Blog, NetApp’s Jay Kidd offered up his forecast for 2014, identifying two main themes: that IT will warm up to full adoption of the hybrid cloud, and that adoption will accelerate for certain emerging technologies like all-flash arrays.

“With a ‘some but not all’ approach to the cloud, IT teams will more strategically assess deployment options for their applications,” Kidd says. And he expects that competition between mainstream storage companies and startups pinning their future on all-flash arrays will heat up.

Kidd’s 10 predictions for 2014:

  1. Hybrid Clouds Become the Dominant Vision for Enterprise IT
  2. Hunger Games Begin for All-Flash Startups
  3. If You Work in IT, You Are a Service Provider
  4. Reality Versus Hype Becomes Clear Around Software-Defined Storage
  5. Storage Virtual Machines Enable Data Mobility and Application Agility
  6. OpenStack Survives the Hype, Moves Beyond Early Adopters
  7. Questions on Data Sovereignty Impact Private and Public Storage
  8. 40GbE Adoption Takes Off in the Data Center
  9. Big Data Evolves from Analyzing Data You Have to Driving the Collection of New Data
  10. Clustered Storage, Converged Infrastructure, Object Storage, in-Memory Databases All Continue Their Momentum in 2014

Visit the blog to get Kidd’s liner notes for each prediction.

communities.netapp.com


Innovation Stack Alters Data Center Paradigm

Virtualized model links hardware, cloud

On his list of predictions for 2014 (see article below), NetApp CTO Jay Kidd claims that converged infrastructure “will become the most compelling building block of data center infrastructure.”

eWeek’s Chris Preimesberger and NetApp created a slideshow that highlights the reasons why 2014 may be the year that converged infrastructure — or the Innovation Stack — takes off. The new paradigm offers a flexible, scalable, and agile model for making new and existing IT hardware more efficient and cloud-ready.

eWeek defines the innovation stack as “a heterogeneous infrastructure that combines IT from best-of-breed vendors and/or open-source communities at each layer to deliver a sophisticated architectural model that addresses the full needs of the data center.”

“New business pressures, such as the need for real-time information and collaborative applications, are putting the effective storage and accessibility of business data at the center of the new IT,” Preimesberger writes. “This, naturally, is requiring more agility from IT systems; the newer ones generally include this, but the older ones usually need upgrades.“

The six layers of the innovation stack are storage media; storage; network; processing; business logic; and presentation. In the new stack, these layers are all implemented as services that manage their respective dynamic resource pools rather than statically defined virtual machines.

The article points to Amazon Web Services (AWS) as a prime example of the innovation stack, as it includes an extensive layer of services for developers, including multiple database options, and supports multiple programming languages with intuitive management tools.

eweek.com


World’s Largest Laser Runs on Data ONTAP

Downtime not an option for research tool

The world’s largest laser, 100 times more energetic than any previous laser system, resides at Lawrence Livermore National Laboratory’s National Ignition Facility (NIF). Scientists are using the laser to investigate defense and energy applications.

As Mike McNamara writes on NetApp’s blog, downtime is not an option for the system, so NIF retired most of its legacy storage and deployed NetApp FAS3250 and FAS3220 storage systems running the clustered Data ONTAP operating system to provide nondisruptive operations.

“Each time the laser is fired at a target, nonrelational object data produced by scientific instruments (about 50TB per year) is captured in files on network-attached storage, which must be accessible 24/7 for physicists to analyze,” McNamara writes. “Algorithms then generate representations of the x-rays, plasmas, and other scientific phenomena that are stored as relational data in Oracle databases.

“An eight-node NetApp cluster stores the virtual machine operating system images, while a four-node NetApp cluster stores scientific data in Hierarchical Data Format (HDF) to be ingested to Oracle SecureFiles. 800 Linux virtual machines connect to the NetApp NFS cluster over a 10GbE network.

“NetApp’s unified scale-out architecture allowed NIF to maintain constant availability for very large amounts of data. NIF anticipates eliminating up to 60 hours of planned downtime annually, maximizing facility availability. In addition, all of the NetApp storage systems can be managed as a single logical pool that can seamlessly scale to tens of petabytes and thousands of volumes.”

Communities.netapp.com


NetApp CEO: Data Management is the Future

In Q&A, NetApp CEO looks ahead

Rob Preston of InformationWeek interviewed NetApp CEO Tom Georgens recently, eliciting a wide range of insights into how Georgens sees the storage industry changing — and where NetApp will find competitive advantage.

In the preface to his interview, Preston observes that it is both the best of times and the worst of times for the storage industry: data volumes are soaring and storage budgets are growing as well, but the aggregate growth rate of storage vendors is weak to negative due to the emergence of cheaper technology alternatives and the reluctance of many cash-strapped potential customers to invest while the landscape is in such flux.

Here are some of Georgens’ thoughts, as related by Preston:

“There’s a transition of IT professionals from being effectively owner-operators to figuring out the role of external services, whether it’s software-as-a-service, traditional enterprise services, or hyper-scale services … [and becoming] integrators/brokers.”

“Our view in the end is that NetApp is an integrator of technologies that deliver customer solutions. I view flash, the cloud, disk drives, DRAMs, processor technology, as just things that integrate into a data management scheme… The more things that are managed by Data ONTAP is money in the bank for us …”

“Over time, the primary differentiator will be integrated systems. I think we need to take a step back on software-defined. Ultimately, what software-defined is is a set of common data services that can manage all the storage in my enterprise. From NetApp’s point of view with ONTAP, we’ve been on that kick all along, long before we called it software-defined. We didn’t build five different products. We built one product. Whether it’s SAN or NAS or backup or archiving or whatever, it’s all managed by Data ONTAP. So the fundamental premise that customers want a single set of data management throughout all different data types — I truly buy that.”

informationweek.com


NetApp Launches New ‘Scale-out Innovations’

But some wonder, where’s FlashRay?

NetApp announced on Feb. 19 the launch of “new unified, scale-out innovations” that let customers manage their data through a single storage and data management platform. These innovations include the FAS8000 enterprise storage array series, which can be deployed in traditional data centers as well as across newer hybrid cloud platforms, and FlexArray, the company’s new virtualization software that enables NetApp’s enterprise customers to virtualize and manage third-party arrays.

The new products reflect NetApp’s ongoing evolution from “storage company” to “data management” vendor. The FAS8000 range of enterprise arrays supports Network Attached Storage, Storage Area Network, Fibre Channel, and Ethernet all on the same platform for versatile access. And FlexArray enables users to virtualize other storage systems such as EMC, Hitachi, or NetApp’s E-series on the FAS8000 arrays. “An integration of this sort is likely to push sales of the company’s product among first-time buyers as well as among clients looking to upgrade their existing systems,” suggests an article on Forbes.com.

But Dave Raffo, in an article on techtarget.com, notes the conspicuous absence of an all-flash storage array — something NetApp’s chief competitors already have on the market and are pushing hard. FlashRay, NetApp’s long-awaited all-flash storage array, is scheduled to be released later this year, though a more specific date has not been announced by NetApp.

According to Raffo, “FlashRay is the last to market because NetApp designed it from the ground up instead of buying a flash startup (as EMC and IBM did) or put flash into an existing platform (the approach taken by Hitachi Data Systems, Hewlett-Packard and Dell). NetApp’s goal is to come up with a different operating system than Data OnTap that is used for FAS storage, while keeping all the storage and data management capabilities on OnTap.”

While NetApp’s strategy might be making some impatient, others believe the company’s approach will pay off in the long run.

Forbes.com

techtarget.com

NetApp press release


Verizon Deal to Extend Reach of Data ONTAP

NetApp’s presence in the cloud will increase, thanks to a recently announced partnership with Verizon. In an article on crn.com, writer Joseph F. Kovar says that NetApp customers will be able to access NetApp Data ONTAP as a virtual storage appliance on the Verizon cloud, giving them the same management capabilities and features they currently have with their on-premises NetApp hardware.

Because the Data ONTAP-based virtual appliance uses the same ONTAP found in NetApp’s physical appliances, says John Considine, CTO for Verizon Terremark, Verizon clients “can connect their cloud storage to their physical storage. They can do snapshots, replication, archiving, anything they do in their data centers, all managed with the same pane of glass.”

Industry experts commended the partnership. John Woodall, vice president of engineering at Integrated Archive Systems, a long-time NetApp partner, drew an analogy between NetApp’s increasing pervasiveness and the “Intel Inside” campaign: “This is becoming ‘NetApp Everywhere,’” he said. “NetApp ONTAP is the most common storage operating system in the world. The partnership with Verizon Cloud is another way to leverage that technology to extend data center management services from the data center to the cloud.”

Currently in beta, the Verizon Cloud is scheduled to go into production sometime in 2014. The NetApp Data ONTAP virtual storage appliance is expected to go live at the same time, according to Verizon Terremark’s Considine.

crn.com

New Cloud-Based Data Protection for NetApp Storage

In other industry news, Infrastructure-as-a-Service provider Artisan Infrastructure and Arrow Electronics recently unveiled a cloud-based data protection service for NetApp customers. John Austin of Arrow Electronics, in an article on crn.com, says the service primarily targets hardware-focused solution providers “who are struggling … to figure out how to get into the cloud and subscription business.”

“We are working with Arrow to help VARs stay viable as customers move to cloud services,” said Brian Hierholzer, CEO of Artisan. Solution providers partnering with NetApp and Arrow can now leverage Artisan’s cloud storage to offer customers data protection via the same NetApp tools they already use, Hierholzer said.

crn.com

