    Cloud Solutions Blog

    Top 10 Strategies to Manage Cost and Continuously Optimize AWS - Strategy #7 - Anonymization is the Enemy of Accountability - Tagging to the Rescue

Posted by HyperGrid on Fri, Mar 01, 2019 @ 11:09 AM

This is part 7 of a ten-part series, "Top 10 Strategies to Manage Cost and Continuously Optimize AWS," which is an excerpt from an eBook by the same name, produced by HyperGrid in cooperation with AWS. A link to download the entire eBook with all 10 strategies is located at the bottom of this post.

CLOUD STRATEGY #7 - ANONYMIZATION IS THE ENEMY OF ACCOUNTABILITY—TAGGING TO THE RESCUE

SITUATION AWS customers use accounts as units of autonomy and as a security boundary between environments. Often, accounts are linked to VPCs rather than to organizational or business units, which makes it hard to create a linkage between spend and business initiatives. In recognition of this issue, AWS has introduced a service called AWS Organizations to enable management of accounts along organizational boundaries. However, this is a relatively new tool, and many customers already have an account structure in place that is based on previous best practices.

COMPLICATION AWS bills are very detailed; running into thousands of lines is quite common, and the “million line bill” is not unheard of. Famously, Netflix tweeted in 2016 that their AWS bill was over 700 million lines long. With VPCs—and not organizational boundaries—often defining the operational boundary of an account, linking spend back to an account is often not sufficient to create a linkage between business initiatives and AWS spending.

    IMPLICATION If you lose the direct linkage between operations and spend, you run the risk of removing cost accountability from the individual account owners. This flies in the face of good cloud governance. Good cloud governance means empowering your users to monitor the financial impact of their decisions, as well as giving them the tools to optimize costs.

POSITION AWS has enabled tags across nearly all of its products and services, making them the best way to group and identify resources. Tags make it possible to automate cost allocation and are often the only way to create a linkage between resource utilization and business initiatives. Proper use of tags is critical to maintaining good cloud governance.

ACTION The first step to using tags is to set up a tag strategy: define a set of mandatory tags (e.g., ProjectID, OwnerContact, ApplicationName, and so forth) that must be attached to every resource. Since AWS tags are not mandatory, it is important to monitor and enforce their use. Use HyperCloud Analytics to find resources that are untagged and set up a mechanism to enforce their tagging, or disable untagged resources on a set schedule.
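As an illustration of what such enforcement can look like outside of HyperCloud, the short Python sketch below uses the AWS Resource Groups Tagging API (via boto3) to flag resources that are missing the mandatory tag keys named above. The required tag set and region are assumptions; adapt them to your own standard.

```python
# Minimal sketch: flag resources that are missing mandatory tags.
# Assumes AWS credentials are configured and that the required tag keys
# below match your own tagging standard.
import boto3

REQUIRED_TAGS = {"ProjectID", "OwnerContact", "ApplicationName"}

def find_untagged_resources(region="us-east-1"):
    client = boto3.client("resourcegroupstaggingapi", region_name=region)
    paginator = client.get_paginator("get_resources")
    offenders = []
    for page in paginator.paginate():
        for resource in page["ResourceTagMappingList"]:
            tag_keys = {tag["Key"] for tag in resource.get("Tags", [])}
            missing = REQUIRED_TAGS - tag_keys
            if missing:
                offenders.append((resource["ResourceARN"], sorted(missing)))
    return offenders

if __name__ == "__main__":
    for arn, missing in find_untagged_resources():
        print(f"{arn} is missing tags: {', '.join(missing)}")
```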

    Enable the “Cost Allocation Tags” feature to ensure that the user-created tags are visible from the billing console and use Cost Explorer or HyperCloud Analytics to create custom views based on tags. Enforcing a well-defined tag structure is important to ensure that all resources can be accounted for in a cost-usage report. For organizations that wish to automate this process, a cloud management platform with built-in lifecycle management (such as the App Store in HyperCloud) is necessary.
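Once the relevant tags are activated as cost allocation tags, the same breakdown can also be pulled programmatically. The sketch below, which assumes a hypothetical ProjectID tag and a fixed billing period, groups a month's unblended cost by that tag using the Cost Explorer API; HyperCloud Analytics and the billing console provide the same view without code.

```python
# Minimal sketch: break down one month's spend by an activated cost
# allocation tag (here the hypothetical "ProjectID" key).
import boto3

def cost_by_tag(tag_key="ProjectID", start="2019-02-01", end="2019-03-01"):
    ce = boto3.client("ce", region_name="us-east-1")
    response = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": tag_key}],
    )
    for result in response["ResultsByTime"]:
        for group in result["Groups"]:
            amount = group["Metrics"]["UnblendedCost"]["Amount"]
            print(f"{group['Keys'][0]}: ${float(amount):,.2f}")

if __name__ == "__main__":
    cost_by_tag()
```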

    The platform carries out the details of creating and managing tags, assigning quotas to individual accounts, and automating chargeback based on predefined policies. The HyperCloud Security and Compliance service can detect and report all deviations from established tagging policy and allow administrators to take one-click automation actions to remediate these issues.

BENEFITS Enforcing a well-defined tagging policy is the foundation not just for automating cost allocation and enabling chargeback; it is also critical for grouping AWS resources for other governance tasks.

    Webinar - Optimize Cost & Mitigate Risk - 021519

    Sponsored by:
    CLICK HERE to see this recorded webinar.
Greybeard Consulting's President, Chris Gerhardt, is featured. Chris talks about how tools like HyperGrid accelerate cloud adoption and streamline IT while cutting costs and mitigating risk.

Download the complete eBook, "Top 10 Strategies to Manage Cost and Continuously Optimize AWS," and read all ten tips now.

    Topics: cloud computing, cloud, cloudmanagement

    Top 10 Strategies to Manage Cost and Continuously Optimize AWS - Strategy #6 - Centralize Governance of Reserved Instances

Posted by HyperGrid on Wed, Feb 27, 2019 @ 10:02 PM

This is part 6 of a ten-part series, "Top 10 Strategies to Manage Cost and Continuously Optimize AWS," which is an excerpt from an eBook by the same name, produced by HyperGrid in cooperation with AWS. A link to download the entire eBook with all 10 strategies is located at the bottom of this post.

    CLOUD STRATEGY #6 - CENTRALIZE GOVERNANCE OF RESERVED INSTANCES

     


    SITUATION

    A common AWS billing model is to have a centralized account with Consolidated Billing, linked to a number of autonomous accounts. With this model, it is common for individual accounts to purchase Reserved Instances based on their individual usage patterns.


    COMPLICATION

Individual accounts may not be linked to each other—the topology is a star, not a mesh. Price breaks from reservations are isolated to the individual purchasing account, or to accounts linked to it. This means unused RIs cannot be shared between accounts in this topology.

    IMPLICATION

Wastage—RIs purchased in an individual account but not utilized by that account cannot be reused. Also, since the individual accounts are no longer responsible for billing, they may not have the hygiene in place to discover and repurpose or resell unused RIs.

    POSITION

    Reserved Instances should be procured and managed centrally. Purchasing an RI is only the beginning; you should have a process in place to continuously monitor RI utilization and modify unused RIs (split/join or exchange convertible RIs) to maximize their usage.

    ACTION

    Use master accounts to manage the lifecycle of Reserved Instances—recommendations, procurement, and monitoring. This way, they can flow to all linked accounts. HyperCloud Analytics is a great place to start for discovering opportunities for Reserved Instances. HyperCloud can easily map all your member and master accounts; plus, it can offer a single consolidated view of all instances across all members, and the analytics to make decision-making across all member accounts easier.
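For teams that want to script this check themselves, the sketch below is one minimal way to report RI utilization from the payer (master) account using the Cost Explorer API; the time period is an assumption, and low-utilization subscriptions are the ones worth splitting, exchanging, or reselling.

```python
# Minimal sketch: from the payer (master) account, report utilization of
# each Reserved Instance subscription for a recent period so unused RIs
# surface quickly. Run with credentials for the consolidated-billing account.
import boto3

def ri_utilization(start="2019-02-01", end="2019-03-01"):
    ce = boto3.client("ce", region_name="us-east-1")
    response = ce.get_reservation_utilization(
        TimePeriod={"Start": start, "End": end},
        GroupBy=[{"Type": "DIMENSION", "Key": "SUBSCRIPTION_ID"}],
    )
    for period in response["UtilizationsByTime"]:
        for group in period["Groups"]:
            pct = group["Utilization"]["UtilizationPercentage"]
            print(f"RI subscription {group['Value']}: {pct}% utilized")

if __name__ == "__main__":
    ri_utilization()
```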

    Enroll all stakeholders in RI purchases. While RIs can be procured centrally, the individual account owners should be involved in the process of analyzing and approving the process of RI procurement. Ensuring that all stakeholders are empowered to understand the cost implications of their actions is part of good cloud governance.

There will be exceptions: pricing breaks due to RIs can flow between accounts, but capacity reservations are localized to the purchasing account. Therefore, there may be workloads (e.g., mission-critical applications or a DR workflow) that need localized management to ensure a capacity reservation.

We also recommend having organizational standards around instance types, increasing the probability that an unused RI will be picked up and applied to an instance in another account. This is not always feasible, however, because individual vendors and business needs drive instance selection.

    BENEFITS

    RIs are the best pricing breaks you can get in EC2. By creating a centralized RI governance model, you can ensure that you capture the maximum benefit from an RI investment.

    Webinar - Optimize Cost & Mitigate Risk - 021519

    Sponsored by:


    CLICK HERE to see this recorded webinar.


Greybeard Consulting's President, Chris Gerhardt, is featured. Chris talks about how tools like HyperGrid accelerate cloud adoption and streamline IT while cutting costs and mitigating risk.

Download the complete eBook, "Top 10 Strategies to Manage Cost and Continuously Optimize AWS," and read all ten tips now.

    Topics: cloud computing, cloud, cloudmanagement

Top 10 Strategies to Manage Cost and Continuously Optimize AWS - Strategy #5 - Model Applications, Not Instances

Posted by HyperGrid on Wed, Feb 27, 2019 @ 10:44 AM

This is part 5 of a ten-part series, "Top 10 Strategies to Manage Cost and Continuously Optimize AWS," which is an excerpt from an eBook by the same name, produced by HyperGrid in cooperation with AWS. A link to download the entire eBook with all 10 strategies is located at the bottom of this post.

CLOUD STRATEGY #5 - MODEL APPLICATIONS, NOT INSTANCES

    SITUATION

    Cloud app planning is a much broader discipline than just instance selection; it requires taking a look at the “whole application”—that is, the application itself and the components that make up the deployment (e.g., load balancers, proxy servers, cache, database servers, log management, performance management, and so forth).

    COMPLICATION

Cloud app planning means mapping the data transfer between these components and choosing an optimal resource placement that takes into account such factors as network and data transfer costs and data durability.

    IMPLICATION

    Lack of a holistic view of costs before deploying the “whole application” can often lead to pricing surprises. For example, before you deploy an application in an HA configuration, you should be able to quantify the network characteristics of the application before the deployment and understand their effect on pricing. An application that generates a lot of traffic could generate more in cross-AZ and cross-region transfer costs than the business benefit of High Availability.

    POSITION

    Applications need to be modeled in their entirety, not just the individual instances. Their interactions and network traffic between components need to be captured and analyzed before making recommendations on placement.

    ACTION

    When planning your application, make sure that you treat your application as a collection of services that communicate with each other, and not just as a collection of individual instances. For example, before deploying an HA architecture across regions, factor in the cost of having your application communicate across regions. Similarly, before deciding on S3 as a storage mechanism, factor in not just your storage costs, but also your retrieval costs.
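To make the "whole application" arithmetic concrete, the sketch below adds assumed transfer, storage, and retrieval charges to the instance bill for a hypothetical two-region HA deployment. All unit prices are placeholders for illustration only; always check current AWS pricing for your regions.

```python
# Illustrative sketch only: fold data transfer and retrieval into a monthly
# estimate instead of pricing instances in isolation. The unit prices below
# are placeholders, not current AWS rates.
INSTANCE_HOURLY_USD = 0.34            # assumed on-demand rate per instance
CROSS_REGION_GB_USD = 0.02            # assumed inter-region transfer per GB
S3_STORAGE_GB_MONTH_USD = 0.023       # assumed S3 storage per GB-month
S3_RETRIEVAL_PER_1K_GET_USD = 0.0004  # assumed price per 1,000 GET requests

def monthly_estimate(instances, cross_region_gb, s3_gb_stored, s3_get_requests,
                     hours_per_month=730):
    compute = instances * INSTANCE_HOURLY_USD * hours_per_month
    transfer = cross_region_gb * CROSS_REGION_GB_USD
    storage = s3_gb_stored * S3_STORAGE_GB_MONTH_USD
    retrieval = (s3_get_requests / 1000.0) * S3_RETRIEVAL_PER_1K_GET_USD
    return {"compute": compute, "transfer": transfer, "storage": storage,
            "retrieval": retrieval,
            "total": compute + transfer + storage + retrieval}

if __name__ == "__main__":
    # Hypothetical two-region HA deployment: 4 instances, 5 TB crossing
    # regions per month, 2 TB in S3, 50 million GET requests.
    print(monthly_estimate(instances=4, cross_region_gb=5000,
                           s3_gb_stored=2000, s3_get_requests=50_000_000))
```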

    Our solution, HyperCloud, has a Cloud App Planning service that allows you to model an entire application, with all its dependencies. The service will recommend a placement for the application (along with costs) that factors in such details as data and network transfer costs and storage tiers. It also allows for easy what-if scenarios that help with optimal placement of test, dev, production and DR instances of the application.

    BENEFITS

    By modeling applications in their entirety and factoring the bigger picture into not just instance choice but also placement, you can ensure that you have a durable deployment architecture without pricing surprises.

    Webinar - Optimize Cost & Mitigate Risk - 021519

    Sponsored by:


    CLICK HERE to see this recorded webinar.


Greybeard Consulting's President, Chris Gerhardt, is featured. Chris talks about how tools like HyperGrid accelerate cloud adoption and streamline IT while cutting costs and mitigating risk.

Download the complete eBook, "Top 10 Strategies to Manage Cost and Continuously Optimize AWS," and read all ten tips now.

    Topics: cloud computing, cloud, cloudmanagement

Top 10 Strategies to Cut Cost and Continuously Optimize AWS - #4 Leverage EC2 Reserved Instances

Posted by HyperGrid on Thu, Feb 14, 2019 @ 10:13 AM

This is part 4 of a ten-part series, "Top 10 Strategies to Manage Cost and Continuously Optimize AWS," which is an excerpt from an eBook by the same name, produced by HyperGrid in cooperation with AWS. A link to download the entire eBook with all 10 strategies is located at the bottom of this post.



    Strategy #4 - LEVERAGE RESERVATIONS TO REDUCE COSTS—START WITH EC2

    SITUATION

    Reservations are the easiest way to optimize your AWS costs because using them requires making no technology changes. Reserved Capacity is available for various services; EC2 Reserved Instances are the most popular, due to the relative size of EC2 spend over other services.

    COMPLICATION

    Accuracy in reservations is key. Reservations are not guaranteed to be applied towards your bill. They apply only if your utilization matches the terms of the reservation. They can only be purchased in 1- and 3-year variants; it can be hard to plan that accurately and that far in advance.

    IMPLICATION

    Managing your reservations is just as important as purchasing them; unused reservations are one of the easiest ways to lose money in AWS. Not only did you spend money on a resource you did not use, you probably had to spend again on another instance that matches your needs more closely.

    POSITION

EC2 Reserved Instances have the most complicated structure of all reservation types due to the abundance of instance options; however, they also provide the greatest flexibility. Understanding and using this flexibility is key to capturing the pricing breaks offered by reservations.

    ACTION

    Look at your EC2 estate and find opportunities for saving via reservations. There are typically plenty of low-hanging fruit—long-running instances that haven’t changed in a long time. To do this at scale, HyperCloud can help you pick from curated RI options based on deep insights from your historic cloud and instance usage patterns and real time price vs. performance.
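If you want a quick first pass without any tooling, the sketch below uses boto3 to list running EC2 instances launched more than a configurable number of days ago; these long-lived, stable instances are the usual starting point for reservation analysis. The 90-day threshold and region are assumptions.

```python
# Minimal sketch: list running instances launched more than N days ago;
# long-lived, stable instances are the usual low-hanging fruit for RIs.
from datetime import datetime, timedelta, timezone
import boto3

def long_running_instances(days=90, region="us-east-1"):
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    candidates = []
    pages = paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                if instance["LaunchTime"] < cutoff:
                    candidates.append((instance["InstanceId"],
                                       instance["InstanceType"],
                                       instance["LaunchTime"].date()))
    return candidates

if __name__ == "__main__":
    for instance_id, instance_type, launched in long_running_instances():
        print(f"{instance_id} ({instance_type}) running since {launched}")
```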

    Start by purchasing Convertible Reserved Instances; they offer a smaller discount but provide a trade-in capability that makes a very big difference to the inexperienced user. With experience under your belt, you can venture out to Standard Reserved Instances to save even more.

A simple best practice is to standardize on one or two instance types, which greatly increases the probability that an unused reservation can be applied against an instance. This can be difficult for an organization that isn't building its own services and instead relies on recommendations from individual vendors.

    Nearly all Reserved Instances can be split or combined, as long as you stay within the bounds of the overall Reservation. HyperCloud provides reports on RI Utilization and Waste, which can be used to find unused RIs and discover opportunities to apply them towards existing instances.

    Be a bargain hunter. Browse the AWS Reserved Instance Marketplace for an RI that may meet your exact needs better than a brand new 1-year and 3-year RI. Note that the Marketplace only lists Standard Reserved Instances, and not Convertible.

    If HyperCloud identifies a Standard Reserved Instance lying unused, sell it on the RI Marketplace to recover cost. Before you list your instance on the RI Marketplace, you can modify it (e.g., split a large instance, combine smaller instances, exchange an RI on an unpopular instance size for a more mainstream one, and so forth) to make it easier to sell.
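For reference, listing an idle Standard RI on the Marketplace can also be done through the EC2 API. The sketch below shows the general shape of such a listing call; the reservation ID, asking price, and remaining term are placeholders, and Convertible RIs cannot be listed this way.

```python
# Minimal sketch: list an idle Standard RI on the Reserved Instance
# Marketplace. The reservation ID and price schedule are placeholders.
import uuid
import boto3

def list_ri_for_sale(reserved_instances_id, upfront_price_usd, term_months,
                     region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    return ec2.create_reserved_instances_listing(
        ReservedInstancesId=reserved_instances_id,
        InstanceCount=1,
        PriceSchedules=[{
            "Term": term_months,          # months remaining on the RI
            "Price": upfront_price_usd,   # asking up-front price in USD
            "CurrencyCode": "USD",
        }],
        ClientToken=str(uuid.uuid4()),    # idempotency token
    )

if __name__ == "__main__":
    # Hypothetical reservation ID and asking price.
    response = list_ri_for_sale("11111111-2222-3333-4444-555555555555",
                                upfront_price_usd=400.0, term_months=8)
    print(response["ReservedInstancesListings"][0]["Status"])
```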

Purchasing RIs with all payment up-front gives you the best discount available from AWS. However, when you sell an unused RI on the RI Marketplace, AWS charges a fee based on the up-front price you list, which can make up-front RIs more expensive to sell. This makes it even more important to use a predictive analytics tool like HyperCloud to help with proactive RI selection.

    BENEFITS

    Reservations allow you to get pricing breaks on AWS instances without making any changes to your deployment. Intelligently purchasing Reserved Instances is important but managing Reserved Instances and ensuring that they are being applied towards utilization will have an even bigger impact on AWS costs.

     

    Webinar - Optimize Cost & Mitigate Risk - 021519

    Sponsored by:


    CLICK HERE to see this recorded webinar.


Greybeard Consulting's President, Chris Gerhardt, is featured. Chris talks about how tools like HyperGrid accelerate cloud adoption and streamline IT while cutting costs and mitigating risk.

Download the complete eBook, "Top 10 Strategies to Manage Cost and Continuously Optimize AWS," and read all ten tips now.

    Topics: cloud computing, cloud, cloudmanagement

Top 10 AWS Strategies to Cut Cost and Optimize - #3 Embrace Serverless

Posted by HyperGrid on Wed, Feb 13, 2019 @ 05:22 PM

This is part 3 of a ten-part series, "Top 10 Strategies to Manage Cost and Continuously Optimize AWS," which is an excerpt from an eBook by the same name, produced by HyperGrid in cooperation with AWS. A link to download the entire eBook with all 10 strategies is located at the bottom of this post.


CLOUD STRATEGY #3 - LOOK FOR LIFE BEYOND EC2—EMBRACE SERVERLESS COMPUTING

    SITUATION - AWS Serverless Computing (Lambda) provides a DevOps-friendly, provision-free, highly scalable, and highly available environment automatically, without having to build the supporting infrastructure in EC2. As a result, AWS Lambda is increasingly becoming a deployment architecture of choice for web services and data processing workflows.

    COMPLICATION - Uncertainty around pricing often leads to discomfort in choosing Lambda for large-scale deployments. EC2 may have its complications with overlapping instance types, but once you’ve chosen, there are no pricing surprises. In contrast, Lambda pricing is multi-dimensional and based on number of transactions, memory allocated to the function, and runtime—each of which can change many times according to business requirements.

IMPLICATION - A poorly architected Lambda deployment may end up costing much more than its EC2 equivalent. For example, running 30 transactions per second—with each transaction having a run time of one second and consuming 512MB of memory—will cost $1,304.90 per month on Lambda. A c5.2xlarge instance could most likely run the same workload with similar performance at less than a fifth of that cost ($248.88 monthly).

    POSITION - A good rule-of-thumb for deciding on Lambda is as follows: If you have a function that is not invoked often, Lambda will almost always be cheaper than EC2. If you have a function that is invoked often but is not resource-intensive, then it will most likely be cheaper to run this in Lambda. However, this should be validated during load testing. If you have a resource-intensive function that runs very frequently, then it is possible that an EC2-based architecture—regardless of its elegance—may be the right choice. Alternatively, it could simply be that the function has been over-provisioned with memory; perhaps the function could be optimized in terms of run-time or memory utilization.

Lambda pricing is a combination of the number of invocations and the GB-seconds used by the function. The number of invocations is a function of business success, not an architectural attribute. This leaves GB-seconds as the only lever for optimizing pricing: allocate more memory and the run time will decrease—or vice versa. Optimizing Lambda costs, then, is all about adjusting the lever between memory allocation and execution time to find the optimal balance.
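The sketch below captures the shape of that calculation: request charges plus GB-seconds, compared against a flat EC2 hourly rate. The Lambda rates shown are the published prices at the time of writing, and the EC2 rate and workload figures are assumptions, so treat the output as illustrative rather than a reproduction of the figures quoted above.

```python
# Sketch of the memory/run-time lever described above. The Lambda rates are
# the published prices at the time of writing (verify current pricing); the
# EC2 hourly rate and the workload figures are assumptions.
PRICE_PER_MILLION_REQUESTS = 0.20
PRICE_PER_GB_SECOND = 0.0000166667

def lambda_monthly_cost(invocations_per_sec, avg_runtime_sec, memory_mb,
                        seconds_per_month=2_592_000):
    invocations = invocations_per_sec * seconds_per_month
    gb_seconds = invocations * avg_runtime_sec * (memory_mb / 1024.0)
    return ((invocations / 1e6) * PRICE_PER_MILLION_REQUESTS
            + gb_seconds * PRICE_PER_GB_SECOND)

def ec2_monthly_cost(hourly_rate, hours=730):
    return hourly_rate * hours

if __name__ == "__main__":
    # Same assumed workload at two memory settings: more memory usually
    # shortens run time, so compare realistic pairs, not identical ones.
    print(f"Lambda, 512 MB:  ${lambda_monthly_cost(10, 0.8, 512):,.2f}")
    print(f"Lambda, 1024 MB: ${lambda_monthly_cost(10, 0.45, 1024):,.2f}")
    print(f"EC2 (assumed $0.34/hr): ${ec2_monthly_cost(0.34):,.2f}")
```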

ACTION - Since Lambda functions run completely headless, understanding their behavior and optimizing their cost tends to be more complicated than for any other AWS service. A number of native AWS tools can be used to understand the behavior of Lambda functions: CloudWatch natively supports a number of Lambda metrics, and the Metric Filters feature of CloudWatch Logs can be used to convert logging data into additional CloudWatch metrics. This may require additional instrumentation of your Lambda functions, but the overall benefits are worth the effort. Once these are in place, you should run your function, at scale, with different memory allocations, and use CloudWatch to measure execution time and calculate cost at scale. With this data, you can determine the best cost tradeoff between memory and run time.
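As one example of the measurement step, the sketch below pulls daily average and maximum Duration datapoints for a single function from CloudWatch using boto3; the function name, look-back window, and region are placeholders. Repeating this after each memory change gives you the run-time side of the cost tradeoff.

```python
# Minimal sketch: pull average and maximum duration for one Lambda function
# from CloudWatch so cost at scale can be computed for each memory setting.
# The function name is a placeholder.
from datetime import datetime, timedelta, timezone
import boto3

def lambda_duration_stats(function_name, days=7, region="us-east-1"):
    cloudwatch = boto3.client("cloudwatch", region_name=region)
    end = datetime.now(timezone.utc)
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/Lambda",
        MetricName="Duration",
        Dimensions=[{"Name": "FunctionName", "Value": function_name}],
        StartTime=end - timedelta(days=days),
        EndTime=end,
        Period=86400,                 # one datapoint per day
        Statistics=["Average", "Maximum"],
        Unit="Milliseconds",
    )
    return sorted(response["Datapoints"], key=lambda d: d["Timestamp"])

if __name__ == "__main__":
    for point in lambda_duration_stats("my-example-function"):
        print(point["Timestamp"], point["Average"], point["Maximum"])
```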

At HyperGrid, we felt there was a better way. We have added a Lambda monitoring and optimization service to our HyperCloud Analytics platform to support Lambda analysis and optimization. Capabilities include:

• Visualize spending on Lambda and detailed metrics on each Lambda function, including key parameters like resource allocations, run time, timeouts, error rates, and so forth.
• Easily identify resource wastage—functions with a high error rate, over-allocation of memory, under-allocated functions (typically those with a high timeout rate), and so forth.
• Make recommendations on right-sizing Lambda functions by adjusting the balance between run time and memory allocation.
• Compare the cost of continuing with Lambda vs. running an equivalent EC2 environment.

    BENEFITS - The ability to analyze and optimize Lambda costs will not only result in a more DevOps friendly and scalable architecture, it will also lower your operational costs by reducing the number of different types of resources that need to be managed.

    Webinar - Optimize Cost & Mitigate Risk - 021519

    Sponsored by:


    CLICK HERE to see this recorded webinar.


Greybeard Consulting's President, Chris Gerhardt, is featured. Chris talks about how tools like HyperGrid accelerate cloud adoption and streamline IT while cutting costs and mitigating risk.

Download the complete eBook, "Top 10 Strategies to Manage Cost and Continuously Optimize AWS," and read all ten tips now.

    Topics: cloud computing, cloud, cloudmanagement

    Top 10 Strategies to Manage Cost and Continuously Optimize AWS - Strategy #2 - STAY SMART

Posted by HyperGrid on Tue, Feb 12, 2019 @ 09:57 AM

This is part 2 of a ten-part series, "Top 10 Strategies to Manage Cost and Continuously Optimize AWS," which is an excerpt from an eBook by the same name, produced by HyperGrid in cooperation with AWS. A link to download the entire eBook with all 10 strategies is located at the bottom of this post.


CLOUD STRATEGY #2 - STAY SMART—USE CONTINUOUS, DATA-DRIVEN INSTANCE OPTIMIZATION

SITUATION

    Choosing the right instance is only the first step toward cost optimization. The cloud is a very dynamic environment. AWS is expanding choices for instance type regularly, and your business needs are constantly evolving. Making a data-driven choice is a great start, but it is just as important to stay on your feet and constantly evaluate choices available for optimization.

     

    COMPLICATION
Instance inertia: even though the process of making a change is simple enough, it can be difficult to justify a change without conclusive evidence of either cost gains or performance improvements.

    IMPLICATION
    The cloud presents a number of arbitrage opportunities: instance families, generations, types, and regions. The pricing differences between these can be leveraged in the interest of cost optimization. However, the large number of choices available can make it very difficult to do this manually, and instance-optimization at scale needs a data-driven solution.

    POSITION
    Continuous re-evaluation of your instance choices will help ensure that you’re always staying ahead of the pricing changes in AWS, as well as your business requirements. However, this does not mean you should “change for the sake of change.” Change comes at a cost and with its own risks; you should have a mechanism in place to perform a cost-benefit evaluation before making a change.

    ACTION
    This can be difficult to do at scale, so we’ve added an Instance Optimization service in our HyperCloud Analytics product. This service allows you to leverage the different arbitrage opportunities available within AWS. By continuously sizing and pricing all alternatives in the EC2 family, HyperCloud Analytics is able to suggest alternative instance types and families (or even generations) that can meet your SLA requirements at a reduced cost. By presenting the cost savings, HyperCloud Analytics allows you to use financial justification to drive proactive changes in cloud usage across the organization.
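Outside of HyperCloud Analytics, the raw ingredients for this kind of comparison are available from the AWS Price List API. The sketch below ranks an assumed shortlist of candidate instance types by Linux on-demand price; the candidate list, location string, and filter values are assumptions, and real right-sizing decisions should also weigh performance data, not price alone.

```python
# Sketch: rank a shortlist of candidate instance types by on-demand price
# using the AWS Price List API (the Pricing endpoint lives in us-east-1).
# Candidate types and the location string are assumptions for illustration.
import json
import boto3

def on_demand_price(instance_type, location="US East (N. Virginia)"):
    pricing = boto3.client("pricing", region_name="us-east-1")
    response = pricing.get_products(
        ServiceCode="AmazonEC2",
        Filters=[
            {"Type": "TERM_MATCH", "Field": "instanceType", "Value": instance_type},
            {"Type": "TERM_MATCH", "Field": "location", "Value": location},
            {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
            {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
            {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
            {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
        ],
        MaxResults=1,
    )
    product = json.loads(response["PriceList"][0])
    on_demand = next(iter(product["terms"]["OnDemand"].values()))
    dimension = next(iter(on_demand["priceDimensions"].values()))
    return float(dimension["pricePerUnit"]["USD"])

if __name__ == "__main__":
    candidates = ["m5.xlarge", "r5.xlarge", "m4.xlarge"]  # assumed shortlist
    priced = sorted((on_demand_price(name), name) for name in candidates)
    for price, name in priced:
        print(f"{name}: ${price:.4f}/hour")
```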

    BENEFITS
    Continuous instance optimization will help you stay ahead of the pricing curve and leverage cost-optimization without investing in any structural or architectural changes to your applications.

    Webinar - Optimize Cost & Mitigate Risk - 021519

    Sponsored by:


    CLICK HERE to see this recorded webinar.


Greybeard Consulting's President, Chris Gerhardt, is featured. Chris talks about how tools like HyperGrid accelerate cloud adoption and streamline IT while cutting costs and mitigating risk.

Download the complete eBook, "Top 10 Strategies to Manage Cost and Continuously Optimize AWS," and read all ten tips now.

    Topics: cloud computing, cloud, cloudmanagement

    Top 10 Strategies to Manage Cost and Continuously Optimize AWS - Strategy #1 - START SMART

Posted by HyperGrid on Sun, Feb 10, 2019 @ 02:30 PM

This is part 1 of a ten-part series, "Top 10 Strategies to Manage Cost and Continuously Optimize AWS," which is an excerpt from an eBook by the same name, produced by HyperGrid in cooperation with AWS. A link to download the entire eBook with all 10 strategies is located at the bottom of this post.

CLOUD STRATEGY #1 - START SMART: PROVISION THE INSTANCE YOU NEED—NO MORE, NO LESS

SITUATION

The Amazon Elastic Compute Cloud (Amazon EC2) web service provides secure, resizable compute capacity in the cloud. EC2 is also the primary contributor to an AWS bill and therefore the best candidate for cost optimization. EC2 is the service with the largest menu of choices, making it difficult to decide accurately on a resource type.

For example, with nearly 150 instance types to pick from, EC2 offers 10 choices for a 4 vCPU / 32 GB configuration, with a 3x price differential separated by optimization type: CPU, memory, storage, and so forth.
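The overlap is easy to see for yourself. The sketch below, written against the EC2 DescribeInstanceTypes API, lists the instance types offering exactly 4 vCPUs and 32 GiB of memory in an assumed region.

```python
# Minimal sketch: enumerate the EC2 instance types that offer 4 vCPUs and
# 32 GiB of memory, illustrating how many overlapping choices exist for a
# single sizing requirement.
import boto3

def instance_types_for(vcpus=4, memory_gib=32, region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instance_types")
    pages = paginator.paginate(Filters=[
        {"Name": "vcpu-info.default-vcpus", "Values": [str(vcpus)]},
        {"Name": "memory-info.size-in-mib", "Values": [str(memory_gib * 1024)]},
    ])
    return sorted(item["InstanceType"]
                  for page in pages
                  for item in page["InstanceTypes"])

if __name__ == "__main__":
    for name in instance_types_for():
        print(name)
```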

    COMPLICATION

    We see many organizations choose an instance type based on generic guidelines that do not take their specific requirements into account. Think your application is CPU-driven? Choose the “C” family. Think memory is your constraint? Choose an “R.” Don’t know? Just choose “T” because you can always change it later.

    IMPLICATION

    EC2 instance types are sized and priced exponentially, so choosing the wrong type can result in a significant pricing premium—or a performance penalty.

    POSITION

    Your instance choice should be data-driven. A typical organization has already invested fairly significantly in performance management solutions. With a substantial amount of historical data already available, there is no reason to throw that away and make instance type choices based on best practices or generic guidelines.

    ACTION

    Leverage your existing IT systems of record to help provide data-driven choices of instance type. Yes, the cloud makes it easy to effect changes, but when you’re operating at scale, there’s no substitute for getting it right the first time.
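As a minimal example of a data-driven input, the sketch below pulls two weeks of average CPU utilization for one instance from CloudWatch; the instance ID, window, and region are placeholders. Sustained low averages are the classic signal that a smaller instance or a different family deserves a look.

```python
# Minimal sketch: pull two weeks of average CPU utilization for an instance
# from CloudWatch as one data-driven input to instance selection. The
# instance ID is a placeholder.
from datetime import datetime, timedelta, timezone
import boto3

def average_cpu(instance_id, days=14, region="us-east-1"):
    cloudwatch = boto3.client("cloudwatch", region_name=region)
    end = datetime.now(timezone.utc)
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - timedelta(days=days),
        EndTime=end,
        Period=3600,
        Statistics=["Average"],
    )
    points = response["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else None

if __name__ == "__main__":
    print(average_cpu("i-0123456789abcdef0"))
```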

    At HyperGrid, we have made instance planning a cornerstone of our HyperCloud™ Analytics solution. By extracting performance and utilization data from your existing performance management systems, and by correlating them with millions of sizing options and pricing points in the cloud, HyperCloud Analytics helps you choose the best instance type for your needs.

    BENEFITS

    Customers who use data-driven instance choices typically see a cost reduction of about 50 percent, compared to those who use generic guidelines or best practices to choose their instance types.

    Webinar - Optimize Cost & Mitigate Risk - 021519

    Sponsored by:


    CLICK HERE to see this recorded webinar.


Greybeard Consulting's President, Chris Gerhardt, is featured. Chris talks about how tools like HyperGrid accelerate cloud adoption and streamline IT while cutting costs and mitigating risk.

Download the complete eBook, "Top 10 Strategies to Manage Cost and Continuously Optimize AWS," and read all ten tips now.

    Topics: cloud computing, cloud, cloudmanagement

What the heck is a "Cloud Services Brokerage"?

Posted by Dave Bondo on Tue, Feb 13, 2018 @ 09:18 AM

Not so many years ago, the life of an IT executive was only slightly complicated. You had to manage network, systems, storage, people, processes, and tools. Your list of decisions was centered around how to architect, build, size, scale, and manage the life-cycle of traditional IT services. More than enough confusion for anyone. But nothing compared with today's demands on an IT exec.

Not only do these demands still exist, you now have the added consideration of cloud computing and everything-as-a-service. Added to that are the demands on WAN infrastructure so that your cloud-delivered services are readily available and secure. On top of your new duties, how do you put controls in place that are effective and easily managed? And through which channels will you purchase technology? The "can't get fired for buying the #1 brand" philosophy is now flanked by hundreds of new cloud-based technologies from an emerging field of specialized suppliers. Sometimes the new guy has the best answer.

If you're considering cloud options, you can't just go with what you know. You have to dig to find the best and the brightest, even among the newbies. Gartner is a good tool for learning; however, it does not monitor actual engagements of its rated suppliers, and it does not have behind-the-scenes information on project successes or failures.

Now, imagine for a moment a network of suppliers that is governed by a central distribution channel: a channel that vets suppliers and sets up engagement criteria for its member agents, who operate in your local market. Consider a cloud broker, like Liftoff IT, who may operate in your market and represent this channel of suppliers to end-user customers like you. The agents focus on understanding your business and your goals for technology. Then, by collaborating with technical experts from the governing body of the network, they scan the member suppliers not only for best-of-breed technology but also for a track record of successes and failures.

    Your broker-consultant comes back to you with a recommended solution and a short-list of suppliers that are best suited to solve your technology puzzle. Your local agent is an independent, vendor-agnostic, trusted partner who receives compensation from the member network, not directly from the supplier.

    A vendor-agnostic solutions provider, like Liftoff IT, serves an important role within your company. Through this network, you are availing yourself of hundreds of potential solutions providers without having to interview every one of them. And because our network is the biggest client of the member suppliers, you will always get attention when you need it as well as the best price, regardless of the size of the independent agent you're working with.

There is still a role for the traditional systems integrator. But why not add the dimension of an independent agent who can broaden the assortment of suppliers from which you pick, while shortening your amount of involvement? A partner brokerage is a business model more like financial services than the traditional VAR: more suppliers, more options, more protection, and better-vetted solutions with less headache.

    Want to know the coolest supplier for the project you're working on? 

       Find Out Now

     

     

    Topics: cloud computing

    5 Steps to Successful Office 365 Adoption

Posted by Dave Bondo on Fri, Feb 10, 2017 @ 12:12 PM

Microsoft now states that there are 1.2 billion Office users in the world and some 60 million commercial accounts. As well, over 85% of the Fortune 500 have a Microsoft product in the cloud, and there are over 50,000 new small business clients migrating to O-365 every month, per Microsoft. Five and a half years in, adoption numbers like these clearly mean that Office 365 has arrived at some level of maturity.

Since an office productivity suite like O-365 is often the first application moved to the cloud, many cloud service providers have a laser-like focus on developing their Office 365 cut-over migration product into a well-oiled solution. Success with this project positions the service provider for eventual, additional application migrations and market-share growth. Cloud adoption forecasts for the next three years show that even the most conservative and staid organizations will be ready to move their office productivity suite to a cloud-based environment. According to a research study by 451 Research, during the next three years 70% of mainstream buyers will have 50% or more of their computing in the cloud. This means that cloud computing is no longer an experiment. It is here to stay and, more importantly, the market is going to get really big, really fast.

    This may be good news for you as a shopper for services. Because of the maturation of this market there are some solution providers who have been facilitating these migrations for almost as many years as the applications have been available. And they have got it down. For example, we have one solution provider in our portfolio of professional service companies who can boast 20 million user migrations. The refinement of their approach is time-tested and quite mature.

    Proven 5-phase method for successful Office 365 cut-over migration.

    Phase 1 | Value Envisioning Workshop (VEW)

    The Value Envisioning Workshop is the primary and most critical step in the user adoption program. This step involves a four-hour onsite session where a variety of business executives (Marketing, HR, Operations, etc.) and our change management experts come together to create a vision for the organization. This comprehensive, structured conversation is built to generate insights on current business problems and set specific goals to help construct an adoption plan.

    • Create Vision
    • Identify Business Use Cases
    • Set Goals
    • Define Success Criteria

    Phase 2 | O-365 Planning and Preparation

    During this phase, the user adoption plan is constructed and materialized to set the stage for success in the upcoming phases. The deliverables will be tailored to each organization’s individual needs, involving the user adoption labs and other engagements to construct a user adoption plan that maximizes business value.

    • Secure Executive Sponsor
    • Engage User Adoption Labs
    • Construct User Adoption Plan
    • Create Customized Marketing and Communication Plan

    Phase 3 | Awareness

Company-wide awareness through effective communication is vital to the success of any transformative IT project. This step, guided by heavy involvement of change management consultants, involves deploying strategies and creating materials developed in the previous phase; the goal is to get the entire organization on board with the vision developed by the business executives during the Value Envisioning Workshop.

    • Customized, Multi-Phase Communication Plans
    • Development of Marketing Materials
    • Deployment of Phase 1 Communication Plan

    Phase 4 | Deploy

    The deployment phase involves a variety of training methods, tailored to each individual organization’s needs, to teach employees, managers, and administrators how to use the new technology in order to accomplish their respective tasks more effectively.

    • Resource Site
    • Onsite Training
    • Remote Webinar Training
    • Self-Paced E-Learning Modules
    • Gamification

    Phase 5 | Measure & Iterate

    This phase is critical to realizing the true value and ROI of the IT project. This is where success criteria, such as employee sentiment or productivity improvements are defined and measured. This phase is also key for gathering success stories and determining future goals.

    • Pre/Post Deployment Surveys
    • User Satisfaction Surveys
    • Success Metric Dashboard & Analysis
    • Regular Success Metric Reviews
    • Additional Labs

After phase 5, ongoing exercises are needed, as feature and function changes necessitate re-engaging this process from the beginning. Hence, ongoing support above and beyond what Microsoft can offer is often a productive choice.

    The cost of implementing a program like this can often be absorbed using incentive funds from Microsoft. The amount, of course, depends on the number of licenses.

    I encourage you to contact us if you would like further information on services provided to make your migration to Office 365 a success.

    FREE e-Book - Updated Overview of Office 365 Click for e-Book

     

    Topics: cloud computing

    4 JIRA Service Desk Metrics Critical to IT Operations

Posted by Colm Hayden on Thu, Jan 14, 2016 @ 03:57 PM


    Managing and optimising your IT infrastructure is becoming an increasingly central component in ensuring business success. With so many critical business functions taking place in the cloud, over the internet and via an ever-increasing number of devices, keeping an eye on your network service is essential in gaining the necessary amount of insight into the way your business functions. However, with the software tools we have access to today and the sheer amount of information they provide, it's easy to get lost in a sea of meaningless metrics and unimportant data.

    As the primary touch point between the users of your network and your support staff, your service desk can tell you a lot about the health of your IT infrastructure, and how efficient your operations are. In this blog, we'll discuss some of the most important Jira Service Desk Metrics, and how to use them to your advantage.

    New Tickets – Number & Department Based
    The first and most straightforward of the metrics you should be paying attention to is the generation of new tickets – which departments they come from, how many there are over a certain time period, the most common types of requests, etc. Being able to separate tickets according to the department they originate in gives you a broad idea of the health of each individual part of your organisation, and can alert you in good time to emergencies on your network. It's important to ensure each new ticket provides your service desk with enough information to resolve the issue as quickly as possible in order to minimise downtime and keep your workforce productive. To this end, it's essential that your ticket types are comprehensive and capable of catering to the unique needs of your business network. It's also essential to track the number of tickets solved – turning some tickets around in good time isn't worth much if you have others that sit unresolved for weeks on end.

    Time to First Response
    The Time to First Response is the amount of time it takes between the creation of a new ticket, and the first public comment by a member of support staff on that ticket. The longer it takes for your service desk staff to respond to a ticket, the higher your chances of disgruntled users and the less efficient your organisation will be. A significant part of this comes down to expectation management – generally speaking, users want the reassurance that someone has seen their ticket and is attending to their problem. A simple, short message acknowledging receipt of the ticket will suffice, and making this a standard part of your processes will go a long way towards building goodwill among users and employees alike.

    Requester Wait Time and Time to Resolution
    While Time to First Response is vital in managing your users' expectations, the time it takes to resolve this ticket and report back to the user who submitted it is even more important – not only in managing expectations, but to ensure as much uptime as possible. Requester Wait Time refers to the combined time a ticket remains unresolved while it is being worked on by support staff. The longer this time, the less efficient the support on that particular ticket has been. Time to Resolution, on the other hand, represents the time between a ticket being lodged and the ticket being solved. There are two distinctions to bear in mind here: Time to First Resolution specifies the time taken between the user logging the ticket and support staff solving it for the first time. Subsequently, the user may have further questions, or the solution applied by support staff may not have solved the problem on the client's side. Full Resolution Time refers to the amount of time taken from the ticket being opened to it being resolved for the last time.

    Number of Requests Resolved through use of Knowledge Base
    For organisations that deal with high volumes of support tickets on a regular basis, the load on your IT support staff can become debilitating before long. To this end, having a public knowledge base or wiki page that your users can access could provide some much-needed respite for overworked support engineers. However, it's vital to track how your users make use of these portals, and how many tickets have been successfully solved through your knowledge base. If the number of tickets solved through your knowledge base are lower than you'd like, you might need to improve the state of your wiki page, or do more to make customers aware of the fact that it exists. With a comprehensive and easy-to-use self-help platform, you could achieve higher resolution rates, while freeing up time for your support staff to attend to more complex and pressing concerns.

    Knowing the most valuable Jira Service Desk metrics is only the first step in ensuring that your IT infrastructure is as efficient as it can be. To get true value from this data, it's imperative to monitor and track these metrics as thoroughly as possible, and use that information to optimise and streamline your service desk wherever possible. To find out more about the most valuable Jira Service Desk metrics and how best to manage them, request a live demo with the ServiceClarity team.

     ServiceClarity Testimonial - Global Gaming Company