Wednesday, November 11, 2020

Cloud Computing VS Cloud Native

“Cloud computing” is when your cloud service provider (CSP) runs your server.

“Cloud native” is when your CSP runs your code.

Friday, September 4, 2020

A reference architecture for multicloud

Data-focused multicloud deals with everything that’s stored inside and outside of the public clouds. Cloud-native databases exist here, as do legacy databases that still remain on-premises. The idea is to manage these systems using common layers, such as management and monitoring, security, and abstraction. 

Service-focused multicloud means that we deal with behavior/services and the data bound to those services from the lower layers of the architecture. It’s pretty much the same general idea as data-focused multicloud, in that we develop and manage services using common layers of technology that span from the clouds back to the enterprise data center.


Saturday, August 22, 2020

When should my company use cloud arbitrage?

Organizations can escape vendor lock-in with a cloud arbitrage model. However, it requires an upfront investment and a solid understanding of the limitations of this approach.

As organizations migrate more workloads to the cloud and adopt a hybrid or multi-cloud strategy, they also face concerns about vendor lock-in. One way to address this issue is to practice cloud arbitrage.

In this model, IT teams regularly compare vendor pricing, performance and overall capabilities, then move workloads to the platform that best meets those needs, with the ultimate goal of saving cost. Developers and administrators need to build and manage cloud-agnostic applications for this to succeed. And workload migrations can still be a challenge, depending on the number of dependencies, the differences between clouds and possible egress fees.

However, as budgets tighten, more organizations should consider adopting cloud arbitrage as a valuable tool to keep cloud spending in check.

Cloud arbitrage tool suggestions

As of publication, there's no managed cloud arbitrage tool offering. Customers need to put their own tools in place to have total visibility into their cloud management, which can then be used to support a cloud arbitrage model.

Quite often, this starts with having a cloud management platform that includes a service catalog and the cloud economics tools to track and analyze your cloud spending for trends. Another component for cloud arbitrage is an infrastructure management option that supports provisioning and management across clouds.

For example, you could deploy HashiCorp's Terraform and Nomad orchestration tools to drive multi-cloud provisioning, so the organization can consistently choose the lowest-cost provider and instance types.

Cloud arbitrage usually relies on containers. Applications and their dependencies can be packaged inside a container and moved to another environment much more easily than with VMs alone. Expect to see container orchestration platforms like Kubernetes play a major role in cloud arbitrage going forward.
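The core arbitrage decision can be sketched in a few lines. The provider names and hourly prices below are purely hypothetical placeholders for data you would pull from each vendor's pricing API, which changes frequently:

```python
# Hypothetical hourly prices for a comparable general-purpose VM on each
# provider. Real numbers come from each vendor's pricing API and vary by
# region, instance type and time.
PRICES_PER_HOUR = {
    "aws": 0.0416,
    "azure": 0.0438,
    "gcp": 0.0401,
}

def cheapest_provider(prices):
    """Return the (provider, hourly_price) pair with the lowest cost."""
    return min(prices.items(), key=lambda item: item[1])

provider, price = cheapest_provider(PRICES_PER_HOUR)
print(f"Deploy to {provider} at ${price}/hour")
```

In practice, the comparison would also weigh egress fees and migration effort, not just the raw instance price.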

Cloud arbitrage best practices

Organizations that look to implement cloud arbitrage want to deploy workloads on the most cost-effective platform. Yet, in today's highly competitive cloud market, the prices for primary cloud services don't change enough for most offerings to make cloud arbitrage worthwhile.

The most significant savings and pricing fluctuations in the cloud market are for VMs provisioned from excess capacity, such as AWS Spot Instances, Google Cloud Preemptible VM instances and Azure Low Priority Virtual Machines or Spot VMs. Discounts of up to 90% off on-demand instance pricing are available.

Spot instances are often used for batch analytics jobs or as part of automatic scaling to support spikes in traffic. These VMs can be shut down with a few minutes' notice if the provider's system requires the capacity, so they should only be used if your application can tolerate the disruption. If you plan to use these discounted VMs in your cloud arbitrage, make sure your application is built with these characteristics in mind.
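To make the trade-off concrete, here is a back-of-the-envelope savings calculation. The on-demand rate and discount are illustrative assumptions, not quoted prices:

```python
def spot_savings(on_demand_rate, discount, hours):
    """Estimated savings from running on spot/preemptible capacity.

    on_demand_rate: hourly on-demand price in dollars (illustrative)
    discount: fraction off on-demand, e.g. 0.9 for a 90% discount
    hours: total runtime of the interruption-tolerant workload
    """
    spot_rate = on_demand_rate * (1 - discount)
    return (on_demand_rate - spot_rate) * hours

# A 1,000-hour batch job at a hypothetical $0.10/hour on-demand rate,
# assuming the maximum ~90% spot discount:
print(round(spot_savings(0.10, 0.90, 1000), 2))
```

This simple model ignores interruption costs: if an eviction forces work to be redone, the effective savings shrink, which is why the workload must tolerate disruption.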

While cloud arbitrage is usually associated with multi-cloud, customers don't need to move to an entirely different cloud provider to gain cost savings. Upgrading to a newer instance class of virtual machine can save 15% with minimal downtime or impact to operations.

Alternatively, buying reserved instances for specific use cases, such as workloads that must be on 100% of the time, can save users 25% or more. This option makes sense for organizations that have the data to ensure they're reserving the right instance class and size. However, you must know the breadth of each cloud provider's offerings -- and how best to use them -- for the highest return on investment.
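The reserved-instance math for an always-on workload is simple enough to sanity-check by hand. The rates below are hypothetical; a real analysis would use the provider's current reserved and on-demand price lists:

```python
def reserved_vs_on_demand(on_demand_rate, reserved_rate, hours_on):
    """Compare the cost of a reserved instance vs. on-demand pricing
    for an always-on workload. Rates are illustrative, not quoted prices.

    Returns (dollars_saved, fraction_saved).
    """
    on_demand_cost = on_demand_rate * hours_on
    reserved_cost = reserved_rate * hours_on
    savings = on_demand_cost - reserved_cost
    return savings, savings / on_demand_cost

# A workload running 24 x 365 = 8,760 hours a year, with a hypothetical
# reserved rate 25% below the on-demand rate:
savings, pct = reserved_vs_on_demand(0.10, 0.075, 8760)
print(f"Save ${savings:.2f}/year ({pct:.0%})")
```

The catch the article notes applies here too: the calculation only holds if the instance really does run 100% of the time and the reserved class and size match what the workload needs.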

Thursday, July 16, 2020

AWS Lambda VS Azure on Serverless Functions

Feature comparison: AWS Lambda vs. Azure Functions

Pricing

AWS Lambda: AWS charges extra for data transfer between Lambda and its storage services, such as S3, if data moves between cloud regions. There is no fee if the Lambda functions and the storage exist in the same region. AWS also charges a premium for Provisioned Concurrency, which keeps functions initialized so they handle requests more quickly.

Azure: Azure does not charge for inbound data transfers, but it does charge for outbound data transfer from one data center to another cloud region. Azure offers a similar concurrency feature for customers who sign up for the Premium plan.

Remarks: Organizations that use multiple cloud regions may find Azure more cost-effective because it does not charge for inbound transfers. Both cloud providers charge their serverless users based on the amount of memory their functions consume and the number of times the functions execute.

Programming language support

AWS Lambda: Lambda supports Go and Ruby. Lambda custom runtimes use binary files to support any language.

Azure: Azure also supports JavaScript and TypeScript. Azure relies on HTTP primitives to support any language.

Remarks: Serverless applications can support many languages. Both support C#, Java, Python and PowerShell. It is possible to support any language by using Lambda custom runtimes or Azure custom function handlers.

Deployment models

AWS Lambda: AWS Lambda deploys all functions in the Lambda environment on servers that run Amazon Linux. Lambda functions can interact with other services on the AWS cloud or elsewhere in a variety of ways, but function deployment is limited to the Lambda service.

Azure: Azure Functions users can deploy code directly on the Azure Functions service, but they can also run software inside Docker containers, which gives programmers more control over the execution environment. Azure Functions works with Dockerfiles that define the container environment. Functions packaged inside Docker containers can also be deployed to Kubernetes through an integration with Kubernetes Event-driven Autoscaling. Azure Functions also offers the option to deploy functions to either Windows- or Linux-based servers. In most cases, the host operating system should not make a difference; however, if your serverless functions have OS-specific code or dependencies, such as a programming language or library that runs only on Linux, this is an important factor.

Remarks: Azure Functions is more flexible, and more complex, in how users deploy serverless functions as part of a larger workload.
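Since both providers bill on memory consumed times execution duration, plus a per-request charge, a rough cost model is easy to write down. The default rates below mirror the pricing dimensions AWS Lambda uses (GB-seconds and millions of requests) but are illustrative; check the provider's current price list:

```python
def serverless_cost(invocations, avg_ms, memory_mb,
                    per_gb_second=0.0000166667,
                    per_million_requests=0.20):
    """Rough monthly serverless bill: memory x duration + request charges.

    The rate defaults are illustrative placeholders shaped like AWS
    Lambda's billing dimensions; real rates vary by provider and region.
    """
    # Compute charge: total GB-seconds consumed across all invocations.
    gb_seconds = invocations * (avg_ms / 1000) * (memory_mb / 1024)
    compute = gb_seconds * per_gb_second
    # Request charge: billed per million invocations.
    requests = (invocations / 1_000_000) * per_million_requests
    return compute + requests

# 5 million invocations a month, 120 ms average duration, 256 MB functions:
monthly = serverless_cost(5_000_000, 120, 256)
print(f"${monthly:.2f}/month")
```

A model like this makes the comparison in the table concrete: since both vendors use the same billing dimensions, the differentiators end up being data-transfer charges and premium features such as provisioned concurrency.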

Tuesday, April 21, 2020

Evolution of AWS Lambda

AWS Lambda is a serverless computing service provided by Amazon Web Services (AWS). Users of AWS Lambda create functions, self-contained applications written in one of the supported languages and runtimes, and upload them to AWS Lambda, which executes those functions in an efficient and flexible manner.

The Lambda functions can perform any kind of computing task, from serving web pages and processing streams of data to calling APIs and integrating with other AWS services.

The concept of “serverless” computing refers to not needing to maintain your own servers to run these functions. AWS Lambda is a fully managed service that takes care of all the infrastructure for you. And so “serverless” doesn’t mean that there are no servers involved: it just means that the servers, the operating systems, the network layer and the rest of the infrastructure have already been taken care of, so that you can focus on writing application code.

AWS Lambda sparked the rise of serverless computing in the cloud. Explore how the function-as-a-service platform developed over time with this infographic.

AWS Lambda was the first serverless offering of its kind -- built to relieve cloud users of infrastructure management responsibilities and execute code in response to predefined triggers.


Since the launch of this event-driven computing platform more than five years ago, AWS Lambda has become central to Amazon's cloud strategy and helped shape the state of serverless computing.


"The advantage for developers is that they don't have to worry about the hardware used to execute the applications," said Jean Atelsek, an analyst at 451 Research. "For admins, these systems make life easier, and costs lower, because they don't have to provision resources in advance."