
Jan 24

Key Questions for Storage in the Cloud

Storage is one of the least sexy and most expensive portions of any cloud deployment. However, picking the correct storage in terms of functionality, performance, and ease of use can pay significant dividends when architecting your overall cloud service environment. Tightly integrated into your overall ITaaS environment, storage can act as the backbone on which other “as a Service” offerings are deployed. Being in the enterprise storage arena, I’ve spoken to many customers who have the same question: “Where do we start?” With little guidance out there on integrating storage into the overall cloud services concept, organizations are left scratching their heads, wondering what steps they need to take to be successful now and in the future. To get at an answer, organizations should ask themselves three key questions about the storage they need: How big? How fast? How safe?

Let’s start with “How Big?” This question deals with storage and storage controllers (or the “spinning rust,” as I’ve heard it called) as well as storage savings technologies such as deduplication, FlexClones, and thin provisioning. At its baseline, all enterprise storage is a collection of disks, with key features and functionality provided by the software on the storage controller. For NetApp, that software is Data ONTAP, which brings performance, efficiency, and organization to the collection of disks. This collection of disks is what your environment will live on, and just like any environment it needs enough room to grow and expand. Disks are the key component around which everything else is built, so we’ll start by choosing the right disks for your environment. The disk types found today are SAS, SATA, and SSD, each with different sizes, performance, and cost. I won’t get into the specifics here, but SSD is the smallest in capacity and by far the highest in cost and performance, SATA is the largest in capacity and the lowest in cost and performance, and SAS sits in the middle. In a cloud service offering, all three types may be available and should be automatically deployable. But that last point is for another post.
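To make those trade-offs concrete, here’s a minimal sketch in Python that captures them as data, along with a naive tier-selection helper. The relative labels and the workload-to-tier mapping are my own illustrative assumptions, not vendor specifications:

```python
# Relative trade-offs of the three disk types discussed above.
# Labels are deliberately qualitative -- real capacity, IOPS, and
# $/GB figures vary by drive generation and vendor quote.
DISK_TYPES = {
    "SSD":  {"capacity": "smallest", "performance": "highest", "cost": "highest"},
    "SAS":  {"capacity": "medium",   "performance": "medium",  "cost": "medium"},
    "SATA": {"capacity": "largest",  "performance": "lowest",  "cost": "lowest"},
}

def tier_for(workload):
    """Toy stand-in for the automated tier selection hinted at above.
    The workload labels are hypothetical examples."""
    return {"database": "SSD", "general": "SAS", "archive": "SATA"}.get(workload, "SAS")

print(tier_for("database"))  # -> SSD
```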

When designing storage for the cloud, a solid growth plan is essential. Throwing disks willy-nilly at the cloud service offering isn’t efficient or a good use of resources. Buying “just enough to cover” at the start means you’ll have to purchase more storage the moment your environment grows. Buying too much means resources sit idle, wasted while they wait to be filled with all that wonderful data. This requires big-picture thinking. Make a three-to-five-year plan that includes estimates for where you are now, where you’ll be in six months, and where your environment will be in one year, three years, five years. Will these plans change? Probably. But at least you’ll have a guideline to go on. I’d personally recommend a one-year purchasing cycle with roughly a 20% buffer added to the storage sizing to absorb those unexpected projects that always seem to rear their heads. This lets you adjust your spending each year by looking back at the previous year’s growth patterns to predict the coming year.
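As a rough sketch of that yearly cycle, the arithmetic is simple enough to put in a few lines of Python. The growth rate is an input you’d derive from your own history; the 35% in the example is invented for illustration:

```python
def yearly_purchase_tb(current_tb, annual_growth_rate, buffer=0.20):
    """Estimate usable capacity to buy for the next one-year cycle.

    current_tb         -- capacity consumed today
    annual_growth_rate -- growth observed over the previous year, e.g. 0.35
    buffer             -- headroom for unexpected projects (~20% as above)
    """
    projected = current_tb * (1 + annual_growth_rate)
    return projected * (1 + buffer)

# Example: 100 TB in use and 35% growth last year suggests planning
# for roughly 162 TB of usable capacity this cycle.
print(round(yearly_purchase_tb(100, 0.35), 1))  # -> 162.0
```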

If you have NetApp storage, deduplication, thin provisioning, and FlexClones are technologies that also influence the “How Big?” question. Each provides a means of saving valuable storage space (as much as 90% in some environments!). So (getting up on a soap box) USE THESE TECHNOLOGIES TO SAVE SPACE. I have seen multiple instances where customers have all three available and do not use them. They are proven, they work, and they will help your bottom line. Deployed alongside NetApp FlashCache, they can actually increase your performance as well. ‘Nuff said. (Stepping down off my soap box.)
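To see why those savings matter for “How Big?”, here’s a minimal sketch of the effect on effective capacity. The savings ratio is whatever your environment actually achieves; the figures in the example are illustrative, with 90% as the best case cited above:

```python
def effective_capacity_tb(raw_usable_tb, savings_ratio):
    """Logical data that fits after efficiency features reclaim space.

    savings_ratio -- combined fraction reclaimed by deduplication,
                     thin provisioning, and cloning (0.0 to < 1.0).
    """
    return raw_usable_tb / (1 - savings_ratio)

# 50 TB usable at a modest 50% savings holds ~100 TB of logical data;
# at the best-case 90% it would hold ~500 TB.
print(effective_capacity_tb(50, 0.5))  # -> 100.0
print(effective_capacity_tb(50, 0.9))  # -> ~500.0
```

In other words, turning these features on can change the answer to “How Big?” by a factor of two to ten without buying a single extra shelf.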

That leads to the “How Fast?” question. Performance, the speed at which your storage operates, is key. Some environments require the fastest-performing disks (SSD), while others can tolerate lower performance because of their workload. Generally speaking, the more performance you need, the more it’s going to cost. NetApp offers FlashCache and FlashPools, both of which allow you to achieve SSD-like performance without the cost of a shelf of SSD disks. Without the correct storage performance in a cloud environment, you will experience delays and possibly system stoppages, so architecting for performance should be a top concern. I’d architect for performance over size, because while you can expand size by adding disk shelves, a performance bottleneck can sink an entire project before it gets off the ground. And you aren’t just thinking about disk performance: network performance for your enterprise storage factors into the equation as well. After all, you may have all-SSD drives in your storage, but if your network isn’t architected to serve that data out, you are paying the premium for those SSDs without getting the benefit.
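A quick back-of-the-envelope check makes the network point concrete. This sketch assumes hypothetical per-drive throughput and link counts; substitute measurements from your own environment:

```python
def network_is_bottleneck(disk_count, per_disk_mb_s, link_gbit_s, link_count):
    """True if aggregate disk throughput exceeds aggregate network bandwidth."""
    disk_mb_s = disk_count * per_disk_mb_s
    net_mb_s = link_gbit_s * 125 * link_count  # 1 Gbit/s ~= 125 MB/s
    return disk_mb_s > net_mb_s

# 24 SSDs at ~400 MB/s each behind two 10 GbE links (~2,500 MB/s total):
# the disks can push far more than the network can carry, so the SSD
# premium is stranded behind the wire.
print(network_is_bottleneck(24, 400, 10, 2))  # -> True
```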

To draw an analogy, think of storage for cloud services as a sailboat. Sailboats come in different sizes and speeds, and each has a different type of rigging. Think of the disks as the sails and the performance of those disks as the wind. The biggest, most powerful sailboat in the world isn’t going anywhere if it doesn’t have enough wind. What you’re looking for with cloud storage is smooth sailing at an appropriate speed, with the capability to ride over the occasional big wave (i.e., a peak in storage utilization) that may come your way.

Finally, there is “How Safe?” This is usually the last area an organization looks at, but it can be the one that makes or breaks the business. Storage systems aren’t natural-disaster proof. They aren’t fireproof. They aren’t theft proof. They aren’t even employee proof. Any of these hazards can destroy a storage environment, causing the loss of business-critical data and taking down business-critical applications running in your cloud. Architecting a cloud solution with a solid backup and recovery AND disaster recovery plan is crucial in today’s world. Being able to recover from someone deleting an important piece of data, whether maliciously or accidentally, and being able to fail your cloud over to another site if a flood hits your main datacenter are both invaluable. The more safety and security you require for your cloud, the more it will cost. To determine your needs, consider how much data you can afford to lose (recovery point objective, or RPO) and how quickly you need to be back up and operational after an event takes your datacenter down (recovery time objective, or RTO). Then spend accordingly to meet those requirements. It usually takes only one use of a backup and recovery or disaster recovery solution to make up for its cost.
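As a sketch of “spend accordingly,” here’s one way to model the choice: rank your protection options by cost and pick the cheapest one that meets your RPO. The tier names, RPO values, and cost multipliers below are invented placeholders, not product pricing:

```python
# Hypothetical protection tiers -- replace with quotes from your vendor.
PROTECTION_TIERS = [
    {"name": "nightly backup",        "rpo_hours": 24.0, "cost_multiplier": 1.0},
    {"name": "async replication",     "rpo_hours": 0.25, "cost_multiplier": 1.5},
    {"name": "sync replication + DR", "rpo_hours": 0.0,  "cost_multiplier": 2.5},
]

def cheapest_tier_meeting(required_rpo_hours, tiers=PROTECTION_TIERS):
    """Return the least expensive tier whose RPO is tight enough."""
    eligible = [t for t in tiers if t["rpo_hours"] <= required_rpo_hours]
    return min(eligible, key=lambda t: t["cost_multiplier"]) if eligible else None

# A business that can afford to lose at most one hour of data:
print(cheapest_tier_meeting(1.0)["name"])  # -> async replication
```

The same ranking works for RTO; in practice you’d filter on both and take the cheapest tier that satisfies the stricter of the two.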

If you can answer the “How Big?”, “How Fast?”, and “How Safe?” questions for the storage in your cloud, you are well on your way to a successful cloud deployment. I’d definitely suggest you take the time to use these questions to begin architecting your environment. Taken a bit further, these questions could even be used to implement a “Storage as a Service” environment with “dials” that let customers determine each of these for themselves. They need storage? They turn the dial until they get the amount they need. They need performance? They turn that dial as well. And if they need backup, recovery, and DR, they turn a dial to set the level they need. As the dials turn up, the price obviously goes up too, but this lets customers build an environment that meets both their budget and their needs. The trick, of course, is setting this system up. And yes, I realize this may be just a bit “pie in the sky” today. Still, one can dream, can’t one?
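For what it’s worth, the “dials” idea is easy to prototype as a pricing function, even if the operational plumbing behind it is the hard part. Every rate and multiplier below is an invented placeholder purely for illustration:

```python
# Hypothetical rates for a "Storage as a Service" quote -- all numbers
# are made up for illustration.
RATE_PER_GB_MONTH = 0.10                        # capacity dial: $/GB-month
PERF_MULTIPLIER = {1: 1.0, 2: 1.5, 3: 2.5}      # e.g. SATA-, SAS-, SSD-backed
PROTECT_MULTIPLIER = {1: 1.0, 2: 1.4, 3: 2.0}   # backup only .. full DR

def monthly_quote(gb, perf_dial, protect_dial):
    """Price a request as the customer turns the three dials."""
    base = gb * RATE_PER_GB_MONTH
    return base * PERF_MULTIPLIER[perf_dial] * PROTECT_MULTIPLIER[protect_dial]

# 500 GB at mid performance with full DR:
print(f"${monthly_quote(500, 2, 3):.2f}/month")  # -> $150.00/month
```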

Thanks for your time!

-McCloud


