CA Agency Finds Savings, Efficiencies in Cloud

Teams with NetApp to revamp architecture

“We were in dire straits,” recalls Tim Garza, IT director for the California Natural Resources Agency (CNRA). “Without a fundamental change in our service architecture, we would have been out of the game.”

The year was 2008, and state agencies throughout California faced severe budget cuts. With 29 departments and responsibility for California’s coastlines, parks, fish, wildlife, energy sources, and water, the CNRA needed to radically change the way it operated in order to fulfill its mission.

State CIO Carlos Ramos developed a vision to slash IT costs and improve efficiency by having each department share a common IT infrastructure. Today, thanks in part to its partnership with NetApp, the CNRA estimates that it has reduced its capital costs by 42% and sped up service delivery by 70%.

According to Rob Salmon, president of NetApp, “The State of California Natural Resources Agency has become the model for how you deploy cloud services in government. Their vision, combined with NetApp’s industry-leading data management portfolio and in collaboration with other virtualization technology partners, is helping a state with the world’s eighth largest economy better manage scarce natural resources during a time when attention to conservation is paramount.”

The CNRA depends on real-time data access to make informed decisions and respond quickly to disasters such as flood, fire, drought, and earthquakes. The multi-tenant private cloud solution that NetApp implemented — consisting of the clustered Data ONTAP operating system, OnCommand System Manager, and FAS hybrid storage systems — delivered the necessary capabilities.

“NetApp understood where we were going, the opportunity with cloud, and delivered the storage technology to enable our vision as a business enabler, not a technology shop,” said Garza. “NetApp provided the multi-tenant, shared-services storage infrastructure and functions to support the increased data demands on resources within our agency.”

Press release:

Video overview:

Case study (requires registration):


Confessions of a NetApp Advocate

A fan offers three reasons why NetApp rocks

“My name is Jesse, and I’m a NetApp Advocate. What does it mean to be a NetApp Advocate? It means I talk about NetApp so much someone at NetApp noticed and gave me a title.”

With that candid introduction, Jesse Anderson, a post-sales systems engineer at Dynamix Group, launches into a paean to all things NetApp — in particular, its innovation, service and support, and culture. In Anderson’s view, it is this “holy trinity” that enables his company to “deliver solutions that are good for our customers, and great for our relationships with those customers.”

As Dynamix Group is a relatively new NetApp partner, Anderson finds himself in the interesting position of advocating for NetApp to both his customers and his internal teams. It’s a challenge he tackles with enthusiasm.

Innovation: Anderson applauds NetApp for “constantly pushing the boundaries of what we can do on the storage platform.” Among the products he is most excited to introduce to his customers — and the reasons why — are:

EF550 Flash Array — “Leaps and bounds ahead of the competition in terms of performance, reliability, and speed.”

FAS8000 — “FAS is all about simplicity, versatility, and efficiency … to get the same functionality from other vendors you’d need multiple devices with different licenses.”

E-Series — “Impressive sustained bandwidth and IOPS, with unique performance boost … bottom line is it works really, really well.”

Clustered Data ONTAP — “The ultimate no-downtime, scale-out and scale-up architecture.”

Because storage equipment typically has a shelf life of three to five years, Anderson points out the importance of partnering with a company that is easy to work with and responsive to its customers’ support needs. NetApp support has been “phenomenal,” Anderson says.

To illustrate NetApp’s positive culture, Anderson includes a picture in his post of himself and other advocates displaying their shaved heads as part of a fundraiser for childhood cancer research that NetApp supports.

communities.netapp.com


NetApp CEO Promises FlashRay in September

All-flash solution completes flash portfolio

NetApp will begin shipping its long-awaited FlashRay storage solution next month, CEO Tom Georgens promised during a conference call with analysts on Aug. 14. According to David Raffo of searchstorage.com, Georgens also discussed the next release of Clustered Data ONTAP, which will include features that did not make it into previous releases, such as MetroCluster for high availability.

Georgens said that FlashRay, a new all-flash storage array designed from the ground up to be optimized for high-performance flash technology, is currently being tested by customers and will ship starting next month.

FlashRay will be part of a comprehensive flash storage portfolio that also includes the EF-series of all-flash arrays based on NetApp’s E-series architecture, as well as all-flash versions of the company’s FAS-series.

“The total flash portfolio has been a significant part of our momentum,” Georgens said.

Adding FlashRay to its flash storage lineup will give NetApp the product line it needs to compete with the many startups in the market such as Pure Storage and Nimble Storage, as well as with the latest offerings from established players like EMC, an industry expert told CRN.com.

CRN notes that a September release will get FlashRay to market well ahead of the NetApp Insight technical conference in October, which this year will include a user meeting for the first time. Unitek Education is a Gold Sponsor for the event, which has traditionally focused on technical training for NetApp’s internal team and its channel partners.

“A lot of our partners will be there to get training,” Georgens noted. “That will spill over to the users’ group meeting. We’ll see a lot of synergy by having the two together.”

crn.com

itknowledgeexchange.techtarget.com


NetApp OnCommand Performance Manager

New software adds control, monitoring

NetApp recently introduced OnCommand Performance Manager, new software to monitor and troubleshoot a customer’s Data ONTAP environment.

“With OnCommand,” the company states, “enterprises of all sizes can better enable, control, automate, and analyze a cost-efficient storage management infrastructure. With Data ONTAP at their core, OnCommand tools help IT optimize storage utilization, meet service-level agreements, minimize risk, boost performance, and deliver nondisruptive operations.”

The company also announced updates to other components of the OnCommand portfolio designed to provide faster IT service delivery through improved storage management, automation, and protection.

Noting the importance of storage and data management in the transition to software-defined service delivery, a company press release emphasized how OnCommand enables organizations “to monitor trends in their data center, understand performance goals and the attainment of those goals, and improve control over their storage environment.”

On a more granular level, “Performance Manager gives storage administrators the ability to know whether a cluster or volume is experiencing a performance issue and then identifies which resources are impacting volume response times,” the company stated. “The limits of ‘normal’ performance are automatically calculated and continually adjusted based on workload patterns. This new insight allows the identification of abnormal behavior and the correlation of events across multiple affected volumes to quickly identify the source of the problem, saving organizations time and money. Performance Manager can be operated from within OnCommand’s Unified Manager or as a standalone application.”

In related news, techworld.com reported on June 15 that NetApp had announced a new version of its OnCommand management software that allows its storage arrays to plug into most of the leading cloud management systems used across an IT infrastructure.

netapp.com

techworld.com


Innovation Awards Honor ‘Transformative IT’

NetApp lauds customer breakthroughs

NetApp handed out its annual Innovation Awards earlier this month, honoring its “visionary enterprise customers [who] are successfully navigating through the most complex period in IT with the help of NetApp” and its partners, in the words of Rob Salmon, NetApp President and Head of Go-to-Market Operations.

The winners were selected from more than 100 nominations received from over two dozen countries. The awards ceremony took place on June 4, 2014, at The Computer History Museum in Mountain View, California.

Symantec won in the “Go Beyond” category, which recognizes innovation in cloud services. “We worked closely with NetApp and their partners to create an on-demand software-defined data center based on a NetApp FAS shared storage infrastructure, clustered Data ONTAP, and OnCommand technologies,” said Sheila Jordan, senior vice president and CIO, a move that slashed the time needed to build data center environments from days to minutes.

Innovator of the Year honors (Americas division) were awarded to ION Geophysical Corporation, which helps oil and gas companies “better understand the earth’s subsurface through innovative seismic data processing and imaging solutions,” according to Bill Menger, director of High-Performance Computing. Growth required the company to move its entire computing environment into a new facility. “Using NetApp’s operating system — clustered Data ONTAP — and FAS storage systems, we were able to transition our data to the new data center without disruption to operations,” said Menger.

The State of California Natural Resources Agency won in the Pioneer category (innovative small to mid-size companies), Public Sector. Its director of IT, Tim Garza, explained how the agency “depends on real-time data and systems to make informed decisions in the management of critical resources and respond quickly to natural disasters.” The agency was able to meet growing demand with fewer resources through a shared-services cloud-computing model built on industry technologies that utilize clustered Data ONTAP. “This enabled us to consolidate 28 siloed data centers and decrease overall capital and operational expenditures by 45% and 35%, respectively,” noted Garza. “Now we are able to operate at the speed of business without technology being a constraint.”

See the complete list of winners at netapp.com


Data Plays Star Role in Growth of World Cup

Mobile tech, global interest fuel surge

In 1998, the World Cup was held in France. Eighty thousand spectators attended the event, many with a mobile phone such as the Nokia 5110, with which they could send and receive text messages that probably consumed a total of about 2MB of bandwidth (roughly the equivalent of loading an image-heavy Web page).

But, as a blog post and infographic on netapp.com reveal, the data traffic spawned by World Cups since then has exploded. By 2006, advances in mobile technology had added MMS, email, and some Web traffic to the “spectator bandwidth” mix; the 69,000 spectators in Germany would have used 30GB of bandwidth. And for the first time, portions of the action were streamed online — 125 million times.

At this year’s World Cup in Brazil, the 73,531 spectators at the final match are projected to unleash over 12 terabytes of data, if each shares a one-minute HD video. In addition, 79% of viewers in the U.S. are expected to watch the games live online at home, on mobile devices, and at work.

The blog post also notes that anti-terrorism efforts utilizing facial recognition headsets, high-speed WiFi, and powerful data centers will add another layer of heavy data transmission.

By the 2022 World Cup in Qatar, wearable devices like biometric monitors and eyeglass- or hat-mounted computers will be commonplace, and the 86,250 spectators will likely consume 1.3 petabytes of bandwidth. A “media membrane” will wrap the outside of the stadium and show match coverage, which sounds cool until you consider Japan’s plan (had it won the right to host the Cup) to broadcast the matches in real time to stadiums around the world as 3D holographic projections.

communities.netapp.com


SnapVault on Clustered Data ONTAP 8.2

So with cDOT 8.2 we got SnapVault back. Yay!

That being said, I thought I would walk you all through how to configure it, since some of the settings are not all that intuitive.

I will be doing this using two data vservers on two separate clusters. You could set up SnapVault between data vservers within the same cluster, but when the resources are available we often prefer to archive to a different location.

 

Overview of Necessary Steps

    1. Create a data vserver on the destination cluster with DP volumes
    2. Create intercluster LIFs on all nodes in both clusters
    3. Configure a peer relationship between the two clusters
    4. Configure a peer relationship between the source and destination vservers
    5. Make sure SnapMirror labels are specified in the appropriate Snapshot policy on the source cluster
    6. Apply that Snapshot policy to the source volume
    7. Create a Vault policy on the destination data vserver specifying the number of each type of Snapshot that you wish to retain on the destination (matched by the Snapshot label against the SnapMirror label specified on the source)
    8. On the destination data vserver, define a SnapVault relationship between the source volume and the destination volume, specifying the replication schedule and the Vault policy required

 

1. Create a Data VServer on the Destination Cluster with DP Volumes

First things first, of course: we need someplace to SnapVault to. So we create a vserver on the destination cluster and at least one volume other than the root volume. The volume type should be DP, which will restrict access to the volume; this is the same as I would do if I were setting up a volume to be SnapMirrored to. Here you can see the Create Volume dialog box in OnCommand System Manager. The data vserver has already been created; I am just adding an additional volume and selecting “Data Protection” as the Storage Type. (Remember, if you don’t see certain options in System Manager, it is often because you haven’t licensed your systems correctly.)

Create Volume Dialog Box in OnCommand System Manager

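If you prefer the command line, here is a minimal sketch of the same volume creation; the vserver, volume, and aggregate names and the size are example values:

    cluster2::> volume create -vserver vs_dst -volume vault_dst -aggregate aggr1 -size 100g -type DP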

 

2. Create Intercluster LIFs on All Nodes in Both Clusters

Best practice is to have at least one intercluster LIF per node in each cluster. You create them through the command line using “net int create”. The role must be specified as “intercluster”, and the role of the ports that you assign the LIFs to must be either “intercluster” or “data”. All intercluster LIFs on all nodes within each cluster must be able to communicate with each other; this is referred to as a full-mesh topology. So make sure IP addresses, subnets, and VLANs are configured appropriately.

Here are the commands to run on each cluster to create the appropriate intercluster LIFs.

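A minimal sketch, assuming two-node clusters; the node names, home ports, and IP addresses are examples, so substitute your own:

    cluster1::> net int create -vserver cluster1-01 -lif ic1 -role intercluster -home-node cluster1-01 -home-port e0c -address 192.168.0.111 -netmask 255.255.255.0
    cluster1::> net int create -vserver cluster1-02 -lif ic2 -role intercluster -home-node cluster1-02 -home-port e0c -address 192.168.0.112 -netmask 255.255.255.0

    cluster2::> net int create -vserver cluster2-01 -lif ic1 -role intercluster -home-node cluster2-01 -home-port e0c -address 192.168.0.121 -netmask 255.255.255.0
    cluster2::> net int create -vserver cluster2-02 -lif ic2 -role intercluster -home-node cluster2-02 -home-port e0c -address 192.168.0.122 -netmask 255.255.255.0

You can confirm the LIFs afterward with “net int show -role intercluster”.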

 

3. Configure a Peer Relationship between the Two Clusters

After you have created the intercluster LIFs, you may need to fully refresh the connection in System Manager before configuring the peer relationship, because System Manager does not always notice automatically that those LIFs have been created.

Technically, if you use System Manager it makes no difference which cluster you connect to in order to define the peer relationship; once you create it on one cluster, the relationship in the other direction is created automatically.

You will find the Peer Create option under the Cluster tab on the left side. For my demo I connected to cluster2 to define the peer relationship, but as you will see, once I define it and connect to cluster1, the relationship shows up there as well.


 

You should see the intercluster LIFs you have created for whichever cluster you are connected to on the left. On the right, the dropdown should list the name of the other cluster you created intercluster LIFs for, the cluster you want to set up a peer relationship with. If you don’t see the name of the other cluster in the dropdown, make sure your intercluster LIFs were created correctly and that you can ping from the nodes on one cluster to the nodes on the other cluster. Also, don’t forget to refresh System Manager after you define the LIFs.


 

Once you select the remote cluster from the dropdown, click Authenticate and type in administrative credentials for the other cluster.


 

Once you successfully authenticate with the other cluster, you should see its intercluster LIFs listed as well.
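If you would rather peer the clusters from the command line, a minimal sketch using the example addresses above (you will be prompted for the remote cluster’s password):

    cluster1::> cluster peer create -peer-addrs 192.168.0.121,192.168.0.122 -username admin
    cluster1::> cluster peer show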

 

4. Configure a Peer Relationship between the Source and Destination VServer

Technically you can also do this when you define the SnapVault relationship through OnCommand System Manager, so you don’t necessarily have to do it before you complete the rest of the steps. But here is the command anyway:

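A sketch of that command, using example vserver names (vs_src on cluster1, vs_dst on cluster2); note that a peer relationship created from one side must be accepted on the other:

    cluster1::> vserver peer create -vserver vs_src -peer-vserver vs_dst -applications snapmirror -peer-cluster cluster2
    cluster2::> vserver peer accept -vserver vs_dst -peer-vserver vs_src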

 

5. Make Sure the SnapMirror Labels Are Specified In the Appropriate Snapshot Policy on the Source Cluster

Now comes the tricky and not-so-intuitive part. In order to configure the SnapVault relationship to retain a certain number of particular Snapshots on the destination vserver, you have to specify SnapMirror labels within the Snapshot policy used on the source. You can access the Snapshot policies through the Cluster tab on the left, under Configuration.

This is a screenshot of the Snapshot policy on my source cluster that I later applied to my source volume. The SnapMirror labels I will use to reference particular Snapshots within my Vault policy in Step 7 are listed in the column on the right.

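On the command line, the equivalent is to stamp each schedule in the source Snapshot policy with a label; a sketch with example policy and schedule names:

    cluster1::> volume snapshot policy modify-schedule -vserver vs_src -policy vault_snaps -schedule hourly -snapmirror-label hourly
    cluster1::> volume snapshot policy modify-schedule -vserver vs_src -policy vault_snaps -schedule daily -snapmirror-label daily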

 

6. Apply That Snapshot Policy to the Source Volume

I need to make sure the Snapshot policy I have modified on my source cluster is applied to my source volume.

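The command-line equivalent is a one-liner, reusing the example volume and policy names from above:

    cluster1::> volume modify -vserver vs_src -volume vol_src -snapshot-policy vault_snaps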

 

7. Create a Vault Policy on the Destination Data VServer

And now back to the destination cluster. Here is where my SnapVault policy (Vault policy) will match labels with the SnapMirror labels I specified in Step 5. Under my destination vserver I will see Policies and, nested underneath, Protection Policies. I need to create a new Vault policy. What I specify in the Snapshot Copy Label column has to match the SnapMirror labels I set in Step 5. So what I am saying here is that I want to keep 20 of the daily Snapshots and 10 of the hourly Snapshots on the destination.

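A sketch of the same Vault policy from the command line; the labels must match the SnapMirror labels from Step 5, and the keep counts match the retention described above:

    cluster2::> snapmirror policy create -vserver vs_dst -policy vault_policy
    cluster2::> snapmirror policy add-rule -vserver vs_dst -policy vault_policy -snapmirror-label daily -keep 20
    cluster2::> snapmirror policy add-rule -vserver vs_dst -policy vault_policy -snapmirror-label hourly -keep 10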

 

8. Define a SnapVault Relationship

Finally, I can define and initialize the SnapVault relationship. Under my destination vserver I can go to Protection and create a new Vault relationship, choosing the appropriate source resources and either choosing or creating a destination volume. This is also where a peer relationship with the source vserver will be created automatically if you have not already done so through the command line.

The Vault policy option will reference the Vault policy you created in Step 7, and the schedule you choose will determine how often changes are transferred to the destination volume. Once you fill in all the options, you can have it initialize the relationship automatically when you click Create, and the SnapVault is fully configured.

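For completeness, a command-line sketch of the same relationship, reusing the example names from the earlier steps:

    cluster2::> snapmirror create -source-path vs_src:vol_src -destination-path vs_dst:vault_dst -type XDP -policy vault_policy -schedule daily
    cluster2::> snapmirror initialize -destination-path vs_dst:vault_dst

You can then watch the transfer with “snapmirror show”.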

 


Author Spotlight: Unitek Education Instructor Alicia T.

Instructor Alicia T. has taught at Unitek Education for nearly nine years. She brings two decades of teaching experience to her classroom, as well as prior experience as an IT consultant. Alicia teaches students the skills they need to master NetApp software and hardware, emphasizing the connections between tools and platforms so that developing IT professionals gain the proficiency they need to succeed in the field.


Reinforce Your Data ONTAP Clustering Knowledge with Unitek Education

Unitek Education’s NetApp Clustered Data ONTAP 8.2 Administration is a five-day instructor-led course focused on the evolution of Data ONTAP clustering. The course will teach you how to use new features such as QoS, data-in-place online controller upgrades, and System Manager, while covering scalability, the architecture and functionality of a Data ONTAP cluster, and how to install, configure, manage, and troubleshoot a Data ONTAP cluster.


NetApp CTO Offers 10 Tech Predictions

Hybrid cloud, all-flash startups to flourish

On the NetApp 360 Blog, NetApp’s Jay Kidd offered up his forecast for 2014, identifying two main themes: that IT will warm up to full adoption of the hybrid cloud, and that adoption will accelerate for certain emerging technologies like all-flash arrays.

“With a ‘some but not all’ approach to the cloud, IT teams will more strategically assess deployment options for their applications,” Kidd says. And he expects that competition between mainstream storage companies and startups pinning their future on all-flash arrays will heat up.

Kidd’s 10 predictions for 2014:

  1. Hybrid Clouds Become the Dominant Vision for Enterprise IT
  2. Hunger Games Begin for All-Flash Startups
  3. If You Work in IT, You Are a Service Provider
  4. Reality Versus Hype Becomes Clear Around Software-Defined Storage
  5. Storage Virtual Machines Enable Data Mobility and Application Agility
  6. OpenStack Survives the Hype, Moves Beyond Early Adopters
  7. Questions on Data Sovereignty Impact Private and Public Storage
  8. 40GbE Adoption Takes Off in the Data Center
  9. Big Data Evolves from Analyzing Data You Have to Driving the Collection of New Data
  10. Clustered Storage, Converged Infrastructure, Object Storage, in-Memory Databases All Continue Their Momentum in 2014

Visit the blog to get Kidd’s liner notes for each prediction.

communities.netapp.com


Innovation Stack Alters Data Center Paradigm

Virtualized model links hardware, cloud

On his list of predictions for 2014 (see article below), NetApp CTO Jay Kidd claims that converged infrastructure “will become the most compelling building block of data center infrastructure.”

eWeek’s Chris Preimesberger and NetApp created a slideshow that highlights the reasons why 2014 may be the year that converged infrastructure — or the Innovation Stack — takes off. The new paradigm offers a flexible, scalable, and agile model for making new and existing IT hardware more efficient and cloud-ready.

eWeek defines the innovation stack as “a heterogeneous infrastructure that combines IT from best-of-breed vendors and/or open-source communities at each layer to deliver a sophisticated architectural model that addresses the full needs of the data center.”

“New business pressures, such as the need for real-time information and collaborative applications, are putting the effective storage and accessibility of business data at the center of the new IT,” Preimesberger writes. “This, naturally, is requiring more agility from IT systems; the newer ones generally include this, but the older ones usually need upgrades.”

The six layers of the innovation stack are storage media; storage; network; processing; business logic; and presentation. In the new stack, these layers are all implemented as services that manage their respective dynamic resource pools rather than statically defined virtual machines.

The article points to Amazon Web Services (AWS) as a prime example of the innovation stack, as it includes an extensive layer of services for developers, including multiple database options, and supports multiple programming languages with intuitive management tools.

eweek.com


World’s Largest Laser Runs on Data ONTAP

Downtime not an option for research tool

The world’s largest laser, 100 times more energetic than any previous laser system, resides at Lawrence Livermore National Laboratory’s National Ignition Facility (NIF). Scientists are using the laser to investigate defense and energy applications.

As Mike McNamara writes on NetApp’s blog, downtime is not an option for the system, so NIF retired most of its legacy storage and deployed NetApp FAS3250 and FAS3220 storage systems running the clustered Data ONTAP operating system to provide nondisruptive operations.

“Each time the laser is fired at a target, nonrelational object data produced by scientific instruments (about 50TB per year) is captured in files on network-attached storage, which must be accessible 24/7 for physicists to analyze,” McNamara writes. “Algorithms then generate representations of the x-rays, plasmas, and other scientific phenomena that are stored as relational data in Oracle databases.

“An eight-node NetApp cluster stores the virtual machine operating system images, while a four-node NetApp cluster stores scientific data in Hierarchical Data Format (HDF) to be ingested to Oracle SecureFiles. 800 Linux virtual machines connect to the NetApp NFS cluster over a 10GbE network.

“NetApp’s unified scale-out architecture allowed NIF to maintain constant availability for very large amounts of data. NIF anticipates eliminating up to 60 hours of planned downtime annually, maximizing facility availability. In addition, all of the NetApp storage systems can be managed as a single logical pool that can seamlessly scale to tens of petabytes and thousands of volumes.”

communities.netapp.com

