Startups

5 considerations for saving more and wasting less on cloud services

This article was contributed by Aran Khanna, CEO of Archera

Anyone managing a team’s cloud services faces the gargantuan task of parsing all the hosting and purchasing options to find the right ones for their projects, products, and services. Purchasing one- or three-year commitments for your applications, such as AWS Savings Plans and Reserved Instances (RIs), offers teams significant cost reductions, with the tradeoff of being locked into the covered services for the length of the term. The cost-saving benefits of these commitments are incredibly appealing, with discounts of up to 72% off on-demand pricing, but managing cloud service savings can be challenging. You also risk wasting money if you end up using less of a resource than you committed to over the term, which means commitment selection requires balancing the need for flexibility against savings.

To illustrate the complexity and scope of this task, just look at AWS EC2. There are close to 100,000 EC2 instance configurations that can host your application, and more than 36 different commitment types that can be applied to each of them. That’s EC2 alone, not to mention other hosting options like serverless. Now add other managed services from AWS, and from other cloud providers such as GCP and Azure, each with its own unique options and commitment structures.

Selecting the right infrastructure and subsequent commitments can be incredibly beneficial to the bottom line. It can take a company’s unit economics from good to great or increase margins for low margin businesses, while still leaving engineering teams the flexibility to access the best resources to innovate their product. Many teams, however, miss out on opportunities for savings or generate waste because they did not consider certain elements of cloud service usage scenarios, contracts, or data when planning.

1. Your AWS bill will not give insights into the cost of goods sold

Making future commitments to cloud services requires a thorough understanding of how services affect a business’s profit and loss calculations. People often rely on their AWS bill for this information, but key pieces are missing. The bill is a roll-up of costs, not a granular breakdown, and it does not let you allocate costs (and specifically the savings from centralized commitments) to teams and projects. Furthermore, many miscellaneous AWS costs, such as Premium Support, are unrelated to your cost of goods sold.

You can get more granular data by properly tagging resources. Tag management is key to accurate cost attribution, but it requires enforcing tag hygiene practices across teams, and it still leaves you parsing costs apart in spreadsheets at the end of the month. Tools with auto-tagging and purpose-built financial reporting enable true visibility without burdening your team with additional work.
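To illustrate why tag hygiene matters for cost attribution, here is a minimal Python sketch that rolls up billing line items by a tag key. The line items, tag names, and costs are invented for this example; note how untagged spend collects in a bucket that someone still has to pull apart by hand.

```python
from collections import defaultdict

# Hypothetical billing line items, as might come from a tagged cost export.
# Service names, tags, and costs are invented for this example.
line_items = [
    {"service": "EC2", "cost": 120.0, "tags": {"team": "search", "project": "indexer"}},
    {"service": "EC2", "cost": 80.0,  "tags": {"team": "search", "project": "ranker"}},
    {"service": "S3",  "cost": 40.0,  "tags": {"team": "ads",    "project": "etl"}},
    {"service": "EC2", "cost": 25.0,  "tags": {}},  # untagged -> hard to attribute
]

def allocate_by_tag(items, tag_key):
    """Roll up costs by one tag key; untagged spend lands in an 'untagged' bucket."""
    totals = defaultdict(float)
    for item in items:
        totals[item["tags"].get(tag_key, "untagged")] += item["cost"]
    return dict(totals)

print(allocate_by_tag(line_items, "team"))
# {'search': 200.0, 'ads': 40.0, 'untagged': 25.0}
```

The untagged bucket is the visibility gap: the larger it grows, the less of the bill you can tie back to a team or a cost of goods sold.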

2. Historical data alone will not accurately forecast future utilization and costs or potential cloud service savings

Historical usage data for a product or project is key to understanding the baseline needs of your engineering and developer teams when planning and selecting commitments. It is often overlooked, however, that past resource utilization does not always reflect future usage. Changes in business strategy, right-sizing or migration plans, and other external factors can cause drastic deviations from historical usage patterns. The net result is either over-committing to resources and wasting money, or under-committing and missing savings opportunities.

To anticipate potential deviations from historical usage, consider modeling the impact of different scenarios on usage and costs. Scenario planning is a nuanced activity involving engineering, finance, and operations teams. Inputs such as right-sizing, migration, re-architecting, new projects, business growth, and financial best practices need to supplement historical data when estimating future costs, usage, and cloud service savings, to avoid over- or under-committing to services. For instances with variable future usage, you can choose more flexible commitments with shorter term lengths that can be exchanged or resold on the AWS Marketplace.
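One lightweight way to supplement historical data is to apply multiplicative scenario adjustments to a usage baseline before sizing commitments. The sketch below assumes a single normalized usage number and illustrative adjustment factors; real scenario planning would model each input with the engineering, finance, and operations teams.

```python
# Hypothetical monthly baseline derived from historical usage
# (say, normalized instance-hours); the number is illustrative.
historical_monthly_usage = 10_000

# Scenario adjustments agreed across teams.
# Names and factors are assumptions for this sketch.
scenarios = {
    "right_sizing": 0.85,  # -15% from planned instance right-sizing
    "new_project": 1.20,   # +20% from a planned product launch
    "migration": 0.95,     # -5% as workloads move off the covered service
}

def forecast_usage(baseline, adjustments):
    """Apply multiplicative scenario adjustments to a historical baseline."""
    forecast = baseline
    for factor in adjustments.values():
        forecast *= factor
    return forecast

projected = forecast_usage(historical_monthly_usage, scenarios)
print(round(projected))  # 9690 -- size commitments against this, not the raw 10,000
```

Even this toy model shows how offsetting plans (a launch plus right-sizing) can land the forecast meaningfully below the historical baseline.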

3. High coverage does not mean quality coverage

Teams often strive for a high level of commitment coverage across their infrastructure but do not think about the percentage savings from that coverage. Frameworks, such as the FinOps Maturity Model, include commitment coverage as a metric to measure the maturity of an organization’s cloud financial operations. A higher percent coverage indicates higher maturity.

The issue with prioritizing coverage is that not all commitments provide the same level of discount. Many of the “safest,” most flexible commitments provide less than one third the savings rate of a less flexible commitment. This can lead to situations where you have high coverage but a low savings rate, and you may actually save less than you would with lower coverage and a high savings rate. Companies that are not in a state of growth can become stuck with very little flexibility to increase their savings rate, and must simply wait for their contract terms to expire. Commitment coverage delivers more value as a metric when considered alongside percentage savings, giving you the net cost reduction your commitment strategy is driving. This is particularly important when you model different purchasing strategies to see whether higher coverage actually provides the most savings.
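The coverage-versus-savings-rate tradeoff can be made concrete with a little arithmetic: the effective discount on the bill is coverage multiplied by the savings rate of the covering commitments. The numbers below are illustrative, not published AWS discounts.

```python
def net_discount(coverage, savings_rate):
    """Effective discount on total on-demand spend: coverage times savings rate."""
    return coverage * savings_rate

# Illustrative numbers, not real pricing:
# broad coverage on a flexible, low-discount commitment...
flexible = net_discount(coverage=0.90, savings_rate=0.20)   # 0.18 -> 18% off the bill
# ...can save less than moderate coverage at a deeper discount.
committed = net_discount(coverage=0.60, savings_rate=0.55)  # 0.33 -> 33% off the bill

print(flexible < committed)  # True: lower coverage here yields higher net savings
```

This is why coverage alone is a misleading maturity signal: the second strategy covers a third less of the fleet yet nearly doubles the net cost reduction.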

4. Savings plans alone are not enough to maximize cloud service savings

Many companies achieve a high level of coverage using AWS Savings Plans because of their ease of use relative to RIs. Savings Plans allow a higher degree of flexibility than RIs because you can switch regions, and coverage is immediately applied to any changes in instances. The flexibility provided by RIs, by contrast, still requires manual work to exchange Convertible RIs or resell Standard RIs.

While it’s undeniable that they are easier to use for some organizations, the issue is that their high flexibility comes with a much lower savings rate than RIs. When Savings Plans are used for workloads that will never leave the region they are hosted in (often the majority of workloads in a customer’s account), a lot of money is left on the table that other strategies, such as RIs, could have saved. And because a Savings Plan purchase is a one-way door, it becomes very hard to update the strategy to increase savings rates.

If you are trying to maximize savings, consider a blend of commitment types that includes Standard RIs in addition to Savings Plans. You should also vary the term lengths of your contracts: blend lower-savings one-year contracts with higher-savings three-year contracts to balance flexibility and savings against your planned resource usage.
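A blended strategy can be evaluated the same way, as a spend-weighted average discount across the whole bill. The portfolio weights and rates below are hypothetical placeholders, not AWS pricing.

```python
# A hypothetical blended portfolio. Weights are the share of total spend each
# commitment covers; rates are illustrative discounts, not published pricing.
portfolio = [
    {"commitment": "1-year Compute Savings Plan", "weight": 0.40, "rate": 0.25},
    {"commitment": "3-year Standard RI",          "weight": 0.35, "rate": 0.60},
    {"commitment": "on-demand (uncovered)",       "weight": 0.25, "rate": 0.00},
]

def blended_savings_rate(portfolio):
    """Spend-weighted average discount across the whole bill."""
    return sum(p["weight"] * p["rate"] for p in portfolio)

print(f"{blended_savings_rate(portfolio):.0%}")  # 31%
```

Shifting weight between the short flexible tranche and the long deep-discount tranche is the lever: the blended rate tells you what the mix is actually worth against planned usage.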

5. Vary upfront spend amounts across commitments

Commitments can be purchased with all, partial, or no upfront payment, each with a different discount rate depending on the resource being covered. Typically, all-upfront payment provides the highest discount and no-upfront the lowest. Using a single level of upfront payment across all commitments is often the approach taken, and the one recommended by vendors like AWS, but it has limitations. Paying all upfront for every resource can tie up cash in commitments that offer very little incremental savings over their no-upfront and partial-upfront versions. On the other hand, paying nothing upfront significantly reduces savings and can leave a lot of potentially great deals on the table, depending on the return an organization expects on the upfront money it spends.

In addition to blending commitment types and contract term lengths, using a combination of all, partial, and no upfront payment across your commitments can also increase cloud service savings while retaining flexibility (in this case, financial flexibility). The mix of upfront spends should yield incremental savings that match or exceed your organization’s cost of capital, which acts as a good check that budgets are not being spent inefficiently.
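The cost-of-capital check described above can be sketched as a simple return calculation: the incremental savings from paying upfront, divided by the cash tied up, should clear the organization’s hurdle rate. All figures here are assumptions for illustration.

```python
def upfront_return(no_upfront_total_cost, upfront_payment, all_upfront_total_cost):
    """Return on the cash tied up by paying upfront: incremental savings
    divided by the upfront payment. Figures are illustrative assumptions."""
    incremental_savings = no_upfront_total_cost - all_upfront_total_cost
    return incremental_savings / upfront_payment

# Hypothetical one-year commitment: $10,000 in total paid monthly with no
# upfront, or $9,300 in total if paid all upfront. Not real AWS prices.
roi = upfront_return(10_000, 9_300, 9_300)
cost_of_capital = 0.05  # the organization's assumed hurdle rate

print(roi > cost_of_capital)  # True here: a ~7.5% return clears the 5% hurdle
```

When the computed return falls below the hurdle rate, the no-upfront or partial-upfront variant of the same commitment is the better use of that cash.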

Aran Khanna is the CEO of Archera

Airtable chief revenue officer, chief people officer and chief product officer are out • TechCrunch


As part of Airtable’s decision to cut 20% of staff, or 254 employees, three executives are “parting ways” with the company as well, a spokesperson confirmed over email. The chief revenue officer, chief people officer and chief product officer are no longer with the company.

Airtable’s chief revenue officer, Seth Shaw, joined in November 2020, just one month before chief product officer Peter Deng came on board. Chief people officer Johanna Jackman joined Airtable in May 2021 with an ambitious goal of doubling the company’s headcount to 1,000 within 12 months. The three executives are departing today in a mutual decision with Airtable, but will advise the company through the next phase of transition, the company says. All three executives were contacted for further comment, and this story will be updated with any responses.

An Airtable spokesperson declined to comment on whether the executives were offered severance pay. The positions will be filled by internal employees, who will be introduced at an all-hands meeting this Friday.

Executive departures at this scale are rare, even when a company is going through a heavy round of cuts. But CEO and founder Howie Liu emphasized, in an email sent to staff and seen by TechCrunch, that the decision, Airtable’s first-ever layoff in its decade-long history, was made following Airtable’s choice to pivot to a more “narrowly focused mode of execution.”

In the email, Liu described Airtable’s goal, first unveiled in October, to capture enterprise clients with connected apps. Now, instead of the bottom-up adoption that first fueled Airtable’s rise, the company wants to concentrate on this new direction. Liu’s email indicates that the startup will devote a majority of its resources toward “landing and expanding large enterprise companies with at least 1k FTEs – where our connected apps vision will deliver the most differentiated value.”

The leaner mindset comes after Airtable reduced spending on marketing media, real estate, business technology and infrastructure, the email indicates. “In trying to do too many things at once, we have grown our organization at a breakneck pace over the past few years. We will continue to emphasize growth, but do so by investing heavily in the levers that yield the highest growth relative to their cost,” Liu wrote.

Airtable seems to be emphasizing that its reduced spending doesn’t mean less ambition or less ability to execute. A spokesperson added over email that all of Airtable’s funds from its $735 million Series F are “still intact.” They also said that the startup’s enterprise side, which makes up the majority of Airtable’s revenue, is growing more than 100% year over year; today’s product move simply doubles down on that exact cohort.

Current and former Airtable employees can reach out to Natasha Mascarenhas on Signal, a secure encrypted messaging app, at 925 271 0912. You can also DM her on Twitter, @nmasc_. 




Kubernetes Gateway API reality check: Ingress controller is still needed


The new source of excitement in Kubernetes is, no doubt, the Gateway API. One of the more significant changes to the Kubernetes project, the Gateway API is sorely needed: more granular and robust control over Kubernetes service networking better addresses the growing number of use cases and roles within the cloud-native paradigm.

Shared architecture — at all scales — requires flexible, scalable and extensible means to manage, observe and secure that infrastructure. The Gateway API is designed for those tasks. Once fully matured, it will help developers, SREs, platform teams, architects and CTOs by making Kubernetes infrastructure tooling and governance more modular and less bespoke.

But let’s be sure the hype does not get ahead of today’s needs.

The past and future of the Kubernetes Gateway API

There remains a gap between present and future states of Ingress control in Kubernetes. This has led to a common misconception that the Gateway API will replace the Kubernetes Ingress Controller (KIC) in the near term or make it less useful over the longer term. This view is incorrect for multiple reasons.


Ingress controllers are now embedded in the functional architecture of most Kubernetes deployments; they have become the de facto standard. At some point, the Gateway API will be sufficiently mature to replace all the functionality of the Ingress API, and even the implementation-specific annotations and custom resources that many Ingress implementations use, but that day remains far off.

Today, most IT organizations are still either in the early adoption or the testing stage with Kubernetes. For many, just getting comfortable with the new architecture, networking constructs, and application and service management requirements requires considerable internal education and digestion.

Gateway API and Ingress controllers are not mutually exclusive

As we’ve done at NGINX, other Ingress maintainers will presumably implement the Gateway API in their products to take advantage of the new functionality and stay current with the Kubernetes API and project. Just as RESTful APIs are useful for many tasks, the Kubernetes API underpins many products and services, all built on the foundation of its powerful container orchestration engine.

The Gateway API is designed to be a universal component layer for managing service connectivity and behaviors within Kubernetes. It is expressive and extensible, making it useful for many roles, from DevOps to security to NetOps.

As a team that has invested considerable resources into an open source Ingress controller, NGINX could have chosen to integrate the Gateway API into our existing work. Instead, we elected to leverage the Gateway API as a standalone, more open-ended project. We chose this path so as not to project the existing constraints of our Ingress controller implementation onto ways we might hope to use the Gateway API or NGINX in the future. With fewer constraints, it is easier to fail faster or to explore new designs and concepts. Like most cloud-native technology, the Gateway API construct is designed for loose coupling and modularity, even more so than the Ingress controller, in fact.

We are also hopeful that some of our new work around the Gateway API will be contributed back to the open-source community. We have been present in the Kubernetes community for quite some time and are increasing our open-source efforts around the Gateway API.

The evolving API could be seen as an invaluable insertion point and an opportunity for a “do-over” on service networking. But that does not mean everyone is eager to toss out years of investment in other projects. Ingress will continue to be important as the Gateway API matures and develops, and the two are not mutually exclusive.

Plan for a hybrid future

Does it sound like we think the Kubernetes world should have its Gateway API cake and eat its Ingress controller too? Well, we do. Guilty as charged. Bottom line: We believe Kubernetes is a big tent with plenty of room for both new constructs and older categories. Improving on existing Ingress controllers, which were tethered to a limited annotation capability that induced complexity and reduced modularity, remains critical for organizations for the foreseeable future.

Yes, the Gateway API will help us improve Ingress controllers and unleash innovation, but it’s an API, not a product category. This new API is not a magic wand nor a silver bullet. Smart teams are planning for this hybrid future, learning about the improvements the Gateway API will bring while continuing to plan around ongoing Ingress controller improvement. The beauty of this hybrid reality is that everyone can run clusters in the way they know and desire. Every team gets what they want and need.

Brian Ehlert is director of product management at NGINX.


4 Ways to Use Social Media for Market Research


Opinions expressed by Entrepreneur contributors are their own.

Social media has undoubtedly changed the way brands think about digital marketing. Just a few years ago, networks like Facebook, Instagram and LinkedIn only played a small part in global marketing strategies. But as their user numbers have grown, so has their importance for digital marketing. Today, social media channels offer digital marketers excellent market research opportunities.

How market research sets brands apart

Market research has always been an integral part of building a brand. Conducting market research means gathering information and learning more about your target market, establishing potential customer personas, and evaluating how successful your product could be.

Market research also helps quantify product-market fit. Once your product or service has been launched, research allows brand teams to check whether customers receive the messages they want to communicate.

With a company’s marketing goals, market research forms the foundation of successful brand marketing strategies. In short, it is hard to overstate the importance of market research. Still, there are drawbacks. Traditional market research techniques, such as interviews and focus groups, can be time-consuming. These tools can also be tough on resources if the research is done thoroughly, forcing some brands to launch a marketing strategy built on hunches rather than data. Others limit the scope of their study in the hope that findings may still be valid. Both of these options are putting brands at risk.

Related: The 7 Secrets of Truly Successful Personal Brands

Social media lifts market research limitations

Social media platforms have all the tools necessary to provide brands with answers to market research questions. Social media can offer insights into branding, content messaging and creative design, as well as improve awareness of competitor activity and industry trends.

Much of this is made possible by the sheer number of potential customers brands can access via social media. Facebook alone has nearly three billion active users every month, a figure that has been growing for nearly a decade. Instagram continues to gain ground, with currently around two billion active users.

Social media usage figures are projected to grow for at least the next few years. More than 4.26 billion people spent time on social media in 2021. Statisticians believe that figure will rise to nearly six billion within five years.

But social media can do more than provide user numbers. The companies behind Facebook, Instagram, LinkedIn, and TikTok know a great amount of information about their users, starting with demographics and including lifestyle preferences. These insights enable brands to access the right audience faster than ever before and at lower costs.

Related: In a Crowded Field of Emerging Franchises, Only the Strongest Brands Thrive

How to use social media for market research

Social media channels allow brands to access several layers of information about their industry, the brand itself, competitors, messaging and creative design.

1. Industry insights

Using social media channels is an efficient way to assess industry trends in real time. Channels like LinkedIn, Facebook and Instagram make it easy to spot and isolate leading trends and changes in those trends. A few years ago, images captured consumer attention. More recently, however, video-based channels like TikTok have cemented the importance of video as a tool to connect with customers. Of course, brand teams can choose to ignore certain trends, but it is still important to understand the drivers behind the industry.

In this context, industry drivers are not only topics or tools. Social media has created a relatively new digital marketing phenomenon — working with influencers. Identifying and working with the right influencers can be a critical driver of business growth.

Before the advent of social media channels, gathering similar information required more time and in-depth analysis simply because the information was not as easily accessible.

2. Competitor research

Social media has made it easier to conduct competitor research. Companies from virtually every industry sector have started embracing social media channels to connect with customers and partners. As a result, it is far easier to understand your competitors’ marketing strategies and analyze which marketing tactics and channels work best for them.

Following a competitor’s social media channels helps brands understand what audiences engage with and which content they ignore. Brand teams gain a deeper insight into the mindset of their competitors’ clients. Following these channels regularly allows you to clearly understand your competitors, their audiences, and their marketing approach.

Related: The Ultimate Guide to Competitive Research for Small Businesses

3. Brand positioning

Are your target audiences perceiving your brand the way you would like it to be perceived? Monitoring social media allows your marketing team to answer this question quickly. Hashtags and search functions make it easy to assess how a brand is being discussed, without the delay associated with traditional market research methods.

As a result of gaining instant insights, your team can adjust and correct its brand messaging quicker than ever.

4. Content messaging and design

A traditional approach to determining advertising messages might involve A/B testing, among other methods. While these types of market research are important for developing successful (traditional) advertising campaigns, they can be expensive and delay the campaign.

Social media channels allow brands to test their content messaging and design directly, at minimal cost. Through likes and comments, brands gain instant customer feedback. Within a few posts, it will become clear whether customers are more likely to engage with images, videos or webinars, for example.

If a brand uses social media to generate sales, conversion figures will quickly deliver more tangible insights than A/B testing can. Those insights can immediately be applied to the advertising content, allowing brands to conduct market research and put their findings into practice simultaneously.

Using social media channels for market research lets brands learn about industry trends and competitor activity in real time. Brand teams can also assess brand perception, messaging and content design without delay, optimizing market research results and overall campaign performance.
