Sunday, November 18, 2018

Comparison between Azure vs AWS vs Google Cloud

Just wanted to put together a comparison chart between Azure, AWS and Google Cloud Platform, based on my knowledge and experience.
COMPUTE
  • Azure: Compute is based on Virtual Machines, which work with other tools such as Resource Manager and Cloud Services to deploy applications on the cloud platform. Azure users create a VM from a Virtual Hard Disk (VHD), which is equivalent to a machine image. A VHD can be pre-configured by Microsoft, the user or a third party; the user must specify the number of cores and the amount of memory.
  • AWS: EC2 is the primary compute offering of AWS and provides a wide range of options so users can tailor instances to their needs. Other AWS compute services include EC2 Container Service, AWS Auto Scaling and Lambda, as well as Elastic Beanstalk for application deployment. EC2 users can configure their own VMs, choose pre-configured machine images (AMIs) or customize AMIs. Users choose the size, power, memory capacity and number of VMs, and pick the region and availability zone to launch in (a minimal launch sketch follows this list).
  • Google: Google's scalable Compute Engine delivers VMs in Google's data centres. They are quick to boot, come with persistent disk storage, promise consistent performance and are highly customisable depending on the needs of the customer.
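
As a concrete illustration of the AWS column, here is a minimal sketch that launches a single EC2 instance with the boto3 SDK for Python. The AMI ID, key pair and region are hypothetical placeholders, not real values; treat it as a sketch rather than production code.

# Minimal sketch: launch one EC2 instance with boto3.
# The AMI ID, key pair name and region are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image (AMI)
    InstanceType="t3.micro",           # instance type sets size, power and memory
    KeyName="my-key-pair",             # placeholder key pair for SSH access
    MinCount=1,
    MaxCount=1,
)

print("Launched instance", response["Instances"][0]["InstanceId"])

The same idea applies on the other clouds: on Azure you would create a VM from a VHD or Marketplace image, and on Google Compute Engine from a machine image or public image family.
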
STORAGE
  • Azure: Includes the core Azure Storage service, Azure Blob block storage, as well as Table, Queue and File storage. Azure offers temporary storage through the D: drive and block storage through Page Blobs for VMs; Block Blobs and Files also serve as object storage. It supports relational databases, NoSQL and Big Data through Azure Table and HDInsight, and offers Site Recovery, Import/Export and Azure Backup for additional archiving and recovery options.
  • AWS: Includes Simple Storage Service (S3), Elastic Block Store (EBS), Elastic File System (EFS), the Import/Export large-volume data transfer service, Glacier archive backup and Storage Gateway, which integrates with on-premise environments. AWS provides temporary storage that is allocated when an instance is started and destroyed when the instance is terminated, and block storage (comparable to hard disks) that can stand alone or be attached to an instance. Object storage is offered through S3 and data archiving through Glacier (a short object-storage example follows this list). AWS fully supports relational and NoSQL databases and Big Data.
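
To make the object-storage row concrete, here is a minimal boto3 sketch that creates a bucket and uploads one object to S3; the bucket name and file path are hypothetical placeholders.

# Minimal sketch: store an object in S3 with boto3.
# The bucket name and local file path are hypothetical placeholders.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

bucket = "example-archive-bucket-1234"   # S3 bucket names must be globally unique
s3.create_bucket(Bucket=bucket)

# Upload a local file as an object; S3 stores it durably as object storage.
s3.upload_file("backup/2018-11-18.tar.gz", bucket, "backups/2018-11-18.tar.gz")
print("Uploaded to", f"s3://{bucket}/backups/2018-11-18.tar.gz")
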
NETWORKING
  • Azure: Microsoft offers Virtual Network (VNet), which gives users the ability to create isolated networks as well as subnets, route tables, private IP address ranges and network gateways.
  • AWS: Amazon offers Virtual Private Cloud (VPC) so users can create isolated networks within the cloud. Within a VPC, a user can create subnets, route tables, private IP address ranges and network gateways (a VPC creation sketch follows this list). Both Azure and AWS offer solutions to extend the on-premise data centre into the cloud, along with firewall options.
  • Google: Offers the same networking features as the other two, such as load balancing and connectivity to on-premise systems.
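
The sketch below shows what creating an isolated network and a subnet looks like on the AWS side using boto3; the CIDR ranges are arbitrary example values.

# Minimal sketch: create an isolated VPC and one subnet with boto3.
# The CIDR ranges are arbitrary example values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The VPC defines the private IP address range of the isolated network.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Subnets carve that range into smaller networks, typically one per availability zone.
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")

print("Created VPC", vpc_id, "with subnet", subnet["Subnet"]["SubnetId"])

Azure's VNet and Google's VPC networks follow the same pattern: define an address space, then carve it into subnets and attach route tables and gateways.
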
DATABASES
  • Azure: Supports NoSQL databases, relational databases and Big Data through HDInsight and Windows Azure Table.
  • AWS: Fully supports NoSQL and relational databases as well as Big Data.
  • Google: Supports relational databases along with Google Bigtable.
SECURITY
  • Azure: Cloud security is divided into five layers: data, application, host, network and physical. The Azure infrastructure protects the Azure ecosystem against vulnerabilities at each layer. For the user's data security, Microsoft offers services such as:
      • Controlling and managing user access and identity
      • Securing networks
      • Encrypting operations and communication processes
      • Managing threats
  • AWS: Assures users of increased privacy and more control at lower cost. The main benefits of choosing AWS are:
      • Keeping all your data safe
      • Quick application/solution scalability
      • Meeting compliance requirements
      • Saving costs
PRICING MODEL
  • Azure: Microsoft's pricing is also pay-as-you-go, but Azure charges per minute, which gives a more exact pricing model. Azure also offers short-term commitments with a choice between pre-paid and monthly charges.
  • AWS: Amazon has a pay-as-you-go model and charges per hour. Instances are purchasable under the following models (a worked cost comparison follows this list):
    1. On demand: pay for what you use with no upfront cost.
    2. Reserved: reserve an instance for one or three years, with an upfront cost based on usage.
    3. Spot: customers bid on extra capacity as it becomes available.
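
To see why billing granularity matters, here is a small back-of-the-envelope calculation; the hourly rate is a made-up figure used only to illustrate the rounding effect, not a published price.

# Illustrative only: compare per-hour vs per-minute billing for a short-lived VM.
# The $0.10/hour rate is a hypothetical example, not a published price.
hourly_rate = 0.10       # dollars per hour (hypothetical)
runtime_minutes = 95     # the VM actually ran for 1 hour 35 minutes

# Per-hour billing rounds the runtime up to whole hours (2 hours here).
per_hour_cost = hourly_rate * -(-runtime_minutes // 60)

# Per-minute billing charges only for the minutes consumed.
per_minute_cost = hourly_rate / 60 * runtime_minutes

print(f"Billed per hour:   ${per_hour_cost:.4f}")    # $0.2000
print(f"Billed per minute: ${per_minute_cost:.4f}")  # $0.1583
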
INTEGRATIONS / OPEN SOURCE
  • Azure: If you are already using Windows development tools such as Visual Studio, SQL Database and Active Directory, Azure offers native integration with these tools. For example, you can use the same AD accounts you currently have to sign in to Office 365 or Azure SQL instances. Azure is also a good fit for .NET developers.
  • AWS: AWS has better integration with the open-source community and tools such as Jenkins and GitHub, and it is also friendlier to Linux servers.




Tuesday, October 30, 2018

Microsoft Azure - Overview

Microsoft Azure, formerly known as Windows Azure, is Microsoft's public cloud computing platform.



It provides a range of cloud services, including those for compute, analytics, storage and networking. Users can pick and choose from these services to develop and scale new applications, or run existing applications, in the public cloud.

Microsoft Azure is widely considered both a Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) offering.

Azure products and services

As of July 2018, Microsoft categorizes Azure cloud services into 18 main product types:

Compute -- These services enable a user to deploy and manage virtual machines (VMs), containers and batch processing, as well as support remote application access.

Web -- These services support the development and deployment of web applications, and also offer features for search, content delivery, application programming interface (API) management, notification and reporting.

Data storage -- This category of services provides scalable cloud storage for structured and unstructured data and also supports big data projects, persistent storage (for containers) and archival storage.

Analytics -- These services provide distributed analytics and storage, as well as features for real-time analytics, big data analytics, data lakes, machine learning, business intelligence (BI), internet of things (IoT) data streams and data warehousing.

Networking -- This group includes virtual networks, dedicated connections and gateways, as well as services for traffic management and diagnostics, load balancing, domain name system (DNS) hosting, and network protection against distributed denial-of-service (DDoS) attacks.

Media and content delivery network (CDN) -- These services include on-demand streaming, digital rights protection, encoding and media playback and indexing.

Hybrid integration -- These are services for server backup, site recovery and connecting private and public clouds.

Identity and access management (IAM) -- These offerings ensure only authorized users can access Azure services, and help protect encryption keys and other sensitive information in the cloud. Services include support for Azure Active Directory and multifactor authentication (MFA).


Internet of things -- These services help users capture, monitor and analyze IoT data from sensors and other devices. Services include notifications, analytics, monitoring and support for coding and execution.

Development -- These services help application developers share code, test applications and track potential issues. Azure supports a range of application programming languages, including JavaScript, Python, .NET and Node.js. Tools in this category also include support for Visual Studio, software development kits (SDKs) and blockchain.

Security -- These products provide capabilities to identify and respond to cloud security threats, as well as manage encryption keys and other sensitive assets.

Artificial intelligence (AI) and machine learning -- This is a wide range of services that a developer can use to infuse machine learning, AI and cognitive computing capabilities into applications and data sets.

Containers -- These services help an enterprise create, register, orchestrate and manage huge volumes of containers in the Azure cloud, using common platforms such as Docker and Kubernetes.

Databases -- This category includes Database as a Service (DBaaS) offerings for SQL and NoSQL, as well as other database instances, such as Azure Cosmos DB and Azure Database for PostgreSQL. It also includes SQL Data Warehouse support, caching, and hybrid database integration and migration features.

DevOps -- This group provides project and collaboration tools, such as Visual Studio Team Services, that facilitate DevOps software development processes. It also offers features for application diagnostics, DevOps tool integrations, and test labs for build tests and experimentation.

Migration -- This suite of tools helps an organization estimate workload migration costs, and perform the actual migration of workloads from local data centers to the Azure cloud.

Mobile -- These products help a developer build cloud applications for mobile devices, providing notification services, support for back-end tasks, tools for building APIs and the ability to couple geospatial (location) context with data.

Management -- These services provide a range of backup, recovery, compliance, automation, scheduling and monitoring tools that can help a cloud administrator manage an Azure deployment.

Azure for DR and backup

Just as they can with other public cloud platforms, some organizations use Azure for data backup and disaster recovery (DR). In addition, some organizations use Azure as an alternative to their own data center. Rather than invest in local servers and storage, these organizations choose to run some, or all, of their business applications in Azure.

To ensure availability, Microsoft has Azure data centers located around the world. As of July 2018, Microsoft Azure services are available in 54 regions and can be used from 140 countries. Because not all services are available in all regions, Azure users must ensure that workload and data storage locations comply with all prevailing compliance requirements and other legislation.

Azure pricing and costs

As with other public cloud providers, Azure primarily uses a pay-as-you-go pricing model that charges based on usage. However, if a single application uses multiple Azure services, each service might involve multiple pricing tiers. In addition, if a user makes a long-term commitment to certain services, such as compute instances, Microsoft offers a discounted rate.

Given the many factors involved in cloud service pricing, an organization should review and manage its cloud usage to minimize costs. Azure-native tools, such as Azure Cost Management, can help to monitor, visualize and optimize cloud spend. It's also possible to use third-party tools, such as Cloudability or RightScale, to manage Azure resource usage and associated costs.

Azure competition

Microsoft Azure is one of several major public cloud service providers operating on a large global scale. Other major providers include Google Cloud Platform (GCP), Amazon Web Services (AWS) and IBM.

Currently, there is a lack of standardization among cloud services and capabilities -- no two cloud providers offer the same service in exactly the same way, with the same APIs or integrations. This makes it difficult for a business to use more than one public cloud provider when pursuing a multi-cloud strategy, although third-party cloud management tools can reduce some of these challenges.

Azure history

Microsoft first unveiled its plans to introduce a cloud computing service called Windows Azure in 2008. Preview versions of the service became available and matured, leading to its commercial launch in early 2010.

Although early iterations of Azure cloud services fell behind more established offerings, such as AWS, the portfolio continued to evolve and support a larger base of programming languages, frameworks and operating systems (including Linux).

By early 2014, Microsoft recognized that the implications of cloud computing stretched far beyond Windows, and the service was rebranded as Microsoft Azure.

In early 2018, Microsoft acquired Avere Systems to build out Azure's capabilities in high-performance storage, with Network File System (NFS) and Server Message Block (SMB) file-based storage for Linux and Windows systems.

Saturday, October 27, 2018

Cloud Transformation : Executing the Migration

With CSP selection complete, the organization can now tackle the hard work of executing the actual migration. This task should include:

  • Planning and executing an organizational change management plan.
  • Verifying and clarifying all key stakeholder roles.
  • Detailed project planning and execution.
  • Establishing internal processes for monitoring and periodically reporting the status of all key performance indicators.
  • Establishing an internal cloud migration status feedback and response process.

Cloud Transformation : Smarter Way to Select a Provider

Cloud service provider selection requires a well-developed hybrid IT strategy, an unbiased application portfolio review and the appropriate due diligence in the evaluation of all credible cloud service providers. When discussing this linkage, I leverage the Digital Transformation Layered Triangle as a visualization tool. After agreeing on an appropriate high-level hybrid IT strategy, a core tenet of digital transformation, candidate CSPs' capabilities must be compared based on their:

  • Availability of technology services that align with the business/mission model.
  • Availability of data security controls that address legal, regulatory and data sovereignty limitations.
  • Compatibility of CSP sales process with enterprise acquisition process.
  • Cost forecast alignment with budgetary expectations.

Understanding Cloud Service Agreements (CSAs)

Comparing cloud service agreements from the remaining viable service providers is next. These agreements typically have three components:
  • Customer Agreement: Describes the overall relationship between the customer and the provider. Service management covers the processes and procedures used by the cloud provider, so it is crucial that this document defines the roles, responsibilities and execution of those processes. This document may be called a “master agreement,” “terms of service” or simply an “agreement.”
  • Acceptable Use Policy (AUP): Defines activities that the provider considers to be improper or outright illegal. There is considerable consistency across cloud providers in these documents. While specific details may vary, the scope and effect of these policies remain the same, and these provisions typically generate the least concerns or resistance.
  • Service-Level Agreement (SLA): Describes the levels of service in terms of availability, serviceability or performance. The SLA specifies thresholds and the financial penalties associated with violating those thresholds (a simple penalty calculation is sketched after this list). Well-designed SLAs can avoid conflict and facilitate the resolution of an issue before it escalates into a dispute.
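
As a rough illustration of how SLA thresholds and financial penalties interact, the sketch below computes a service credit from measured monthly availability. The availability tiers and credit percentages are invented for the example and do not reflect any specific provider's SLA.

# Hypothetical SLA: availability tiers and credit percentages are invented
# for illustration and do not reflect any real provider's terms.
def service_credit(measured_availability: float, monthly_bill: float) -> float:
    """Return the credit owed when availability falls below the SLA threshold."""
    if measured_availability >= 0.999:    # met the 99.9% target: no penalty
        credit_pct = 0.0
    elif measured_availability >= 0.99:   # between 99.0% and 99.9%
        credit_pct = 0.10
    else:                                 # below 99.0%
        credit_pct = 0.25
    return monthly_bill * credit_pct

# Example: 99.5% measured availability on a $2,000 monthly bill -> $200 credit.
print(service_credit(0.995, 2000.0))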

Designing a CSA Evaluation

The CSA Evaluation must take into account all critical functional and nonfunctional organizational requirements and IT governance policies, to ensure:
  • Mutual understanding of roles and responsibilities.
  • Compatibility with all enterprise business level policies.
  • Identifiable metrics for all critical performance objectives.
  • Agreement on a plan for meeting all data security and privacy requirements.
  • Identified service management points of contact for each critical technology service.
  • Agreement on a service failure management process.
  • Agreement on a disaster recovery planning process.
  • An approved hybrid IT governance process.
  • Agreement on a CSP exit process.

Cloud Transformation : Application portfolio analysis

An application portfolio screening process includes:
  • The most appropriate CSP (Cloud Service Provider) target deployment environment.
  • Each application’s specific business benefits, key performance metrics and target return on investment.
  • Each application’s readiness for cloud migration.

Build a foundation

The first step in the screening process is determining the most appropriate cloud deployment environment. This practice establishes an operational foundation for subsequent service provider selections by using relevant stakeholder goals and organizational constraints to guide service model, deployment model and implementation option strategy decisions. Enterprises transforming their information technology should evaluate all available options by analyzing an application's transition across three specific high-level domains and their sub-domains:
  • IT implementation model
    • Traditional
    • Managed service provider
    • Cloud service provider
  • Technology service model
    • Infrastructure-as-a-Service
    • Platform-as-a-Service
    • Software-as-a-Service
  • IT infrastructure deployment model
    • Private
    • Hybrid
    • Community
    • Public

Cloud computing domains

These domains and sub-domains outline a structured decision process for placing the right application workload into the most appropriate IT environment. This is not a static decision: as business goals, technology options and economic models change, the relative value of these combinations to your organization may change as well. Plus, single-point solutions are rarely sufficient to meet all enterprise needs. By the end of the cloud migration journey, an organization may require a mix of two, three or as many as 10 variations. This infrastructure variation is why an organizational hybrid IT adoption strategy is crucial.
Figure 1 is an example application decision matrix suitable for this step.
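
To make this step concrete, here is a purely illustrative decision-matrix sketch in Python. The criteria, weights and scores are invented placeholders; a real matrix would be driven by the organization's own stakeholder goals and constraints rather than these values.

# Purely illustrative decision matrix: criteria, weights and scores are invented.
# Each candidate environment combines an implementation, service and deployment model;
# the highest weighted score suggests the best fit for a given application.
weights = {"data_sensitivity_fit": 0.4, "cost_fit": 0.3, "operational_fit": 0.3}

candidates = {
    "Public cloud / IaaS":    {"data_sensitivity_fit": 2, "cost_fit": 5, "operational_fit": 4},
    "Private cloud / PaaS":   {"data_sensitivity_fit": 5, "cost_fit": 3, "operational_fit": 3},
    "Managed service / SaaS": {"data_sensitivity_fit": 3, "cost_fit": 4, "operational_fit": 5},
}

def weighted_score(scores):
    return sum(weights[criterion] * value for criterion, value in scores.items())

for name, scores in candidates.items():
    print(f"{name:<25} {weighted_score(scores):.2f}")
print("Best fit:", max(candidates, key=lambda name: weighted_score(candidates[name])))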

With target deployment environments selected, companies should evaluate each candidate application regarding its business benefits and its ability to leverage cloud computing’s technical and operational advantages. Using a simple qualitative scale, stakeholders should agree on:
  • Key performance indicators relevant to business or mission owner goals.
  • Expected or target financial return on investment.
  • Each application’s ability to use cloud infrastructure scalability to:
    • Optimize time to deliver products or services.
    • Reduce time from business decision to execution.
    • Optimize cost associated with IT resource capacity.
    • Increase speed of cost reduction.
  • Possible application performance improvements that may include:
    • More predictable deployment and operational costs.
    • Improved resource utilization.
    • Quantifiable service level metrics.
  • Value delivered by improved user availability that may be indicated by:
    • Improved customer experience.
    • Implementation of intelligent automation.
    • Improved revenue margin.
    • Enhanced market disruption.
  • Enhancing application reliability by:
    • Establishing enforceable service level agreements.
    • Increasing revenue efficiencies.
    • Optimizing profit margin.

Determine KPIs

The diagram below provides a baseline KPI and ROI model that can easily be modified to manage a qualitative assessment across time, cost, quality and revenue-margin criteria; a simplified scoring sketch follows.
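
As a minimal sketch of such a qualitative assessment (the applications, 1-to-5 scores and financial figures below are invented placeholders, not the baseline model itself):

# Minimal sketch of a qualitative KPI/ROI assessment per application.
# Applications, 1-5 scores and financial figures are invented placeholders.
from dataclasses import dataclass

@dataclass
class AppAssessment:
    name: str
    time_score: int      # ability to cut time-to-deliver (1 = poor, 5 = excellent)
    cost_score: int      # ability to optimize IT resource cost
    quality_score: int   # expected service-level / reliability improvement
    margin_score: int    # expected revenue-margin impact
    migration_cost: float
    annual_benefit: float

    def qualitative_total(self) -> int:
        return self.time_score + self.cost_score + self.quality_score + self.margin_score

    def simple_roi(self) -> float:
        # Very rough first-year ROI: (benefit - cost) / cost.
        return (self.annual_benefit - self.migration_cost) / self.migration_cost

apps = [
    AppAssessment("Payments portal", 4, 3, 5, 4, 120_000, 200_000),
    AppAssessment("Reporting batch", 2, 4, 3, 2, 40_000, 55_000),
]

for app in sorted(apps, key=lambda a: (a.qualitative_total(), a.simple_roi()), reverse=True):
    print(f"{app.name}: qualitative={app.qualitative_total()}, first-year ROI={app.simple_roi():.0%}")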

The final step of this application screening process is determining each application’s readiness to actually migrate to the cloud. This step should qualitatively assess the alignment of an application’s cloud migration decision to the organization’s:
  • Risk appetite and risk mitigation options.
  • Ability to implement, manage and monitor data security controls.
  • Expected migration timelines.
  • Expected ROI realization timelines.
  • Current culture and necessary organizational change management resources.
Performing an application portfolio screening process can be useful in aligning cloud application migration projects with organizational business, technical, security and operational goals. It can also avoid application migration delays, failed business goals and team disillusionment by building and monitoring stakeholder consensus.

Sunday, October 21, 2018

Cloud Transformation : Classify your Data

Security evolves with cloud

Cloud computing has done more than change the way enterprises consume information technology. It’s also changing how organizations need to protect their data. Some may see this as an unintended consequence, but the headlong rush to save money by migrating applications to the cloud has uncovered long-hidden application security issues. This revelation is mostly due to the widespread adoption of “lift and shift” as a cloud migration strategy. Using this option typically precludes any modifications of the migrating application. It can also result in the elimination of essential data security controls and lead to grave data breaches.

Manage deployment

Today, the cloud has quickly become the preferred deployment environment for enterprise applications. This shift to using other people’s infrastructure has brought with it tremendous variability in the nature and quality of infrastructure-based data security controls. It is also forcing companies to shift away from infrastructure-centric security toward data-centric information security models. Expanding international electronic commerce, ever-tightening national data sovereignty laws, and regional data protection and privacy regulations such as GDPR have combined to make many data classification schemas untenable. The Cloud Security Alliance and the International Information Systems Security Certification Consortium (ISC2) both suggest that corporate data may need to be classified across at least eight categories (a sketch of such a classification record follows the list), namely:
  • Data type
  • Jurisdiction and other legal constraints
  • Context
  • Ownership
  • Contractual or business constraints
  • Trust levels and source of origin
  • Value, sensitivity and criticality
  • The obligation for retention and preservation
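
A minimal sketch of what a per-dataset classification record covering these eight categories might look like; the field names and example values are hypothetical, not a prescribed schema.

# Hypothetical classification record covering the eight suggested categories.
# Field names and example values are illustrative only, not a prescribed schema.
from dataclasses import dataclass
from typing import List

@dataclass
class DataClassification:
    data_type: str                       # e.g. "customer PII"
    jurisdictions: List[str]             # legal constraints and data sovereignty
    context: str                         # how and where the data is used
    owner: str                           # accountable process data owner
    contractual_constraints: List[str]   # contractual or business constraints
    trust_level: str                     # trust level and source of origin
    sensitivity: str                     # value, sensitivity and criticality
    retention: str                       # obligation for retention and preservation

record = DataClassification(
    data_type="customer PII",
    jurisdictions=["EU (GDPR)"],
    context="billing and invoicing process",
    owner="billing-process-owner@example.com",
    contractual_constraints=["no transfer outside the EU"],
    trust_level="internal, verified source",
    sensitivity="high",
    retention="retain 7 years",
)
print(record)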

Classify data

Classifying data at this level means that one of the most important initial steps of any cloud computing migration must be a review and possible reclassification of all organizational data. If this step is bypassed, newly migrated applications simply become data breaches in waiting. At a minimum, an enterprise should:
  • Document all key business processes destined for cloud migration.
  • Identify all data types associated with each migrating business process.
  • Explicitly assign the role of process data owner.
  • Assign each process data owner the task of setting and documenting the minimum required security controls for each data type.

Update policies

After completing these steps, companies should review and update their IT governance process to reflect any required expansion of their corporate data classification model.

These steps are also aligned with the ISO 27034-1 framework for implementing cloud application security.

This standard explicitly takes a process approach to specifying, designing, developing, testing, implementing and maintaining security functions and controls in application systems.

It defines application security not as the state of security of an application system but as a process to apply controls and measurements to applications in order to manage the risk of using them.

Cloud Transformation

Business is all about efficiency and effectiveness. In today’s world, however, those twin goals almost always lead to the cloud. Cloud transformation is a journey that organizations have to take in the modern IT world.

Cloud transformation experiences should include:

  • Understanding and classifying business-critical data;
  • Executing an efficient process for screening and selecting workloads and applications for cloud migration;
  • Following a methodology for discovering the most effective strategy for each workload and application migration; and
  • Seamless alignment with the client’s multi-cloud strategy.
Experience has also shown that businesses are in different stages of their “Journey to the Cloud.”  These initial stages often include:

  • Planning and designing common foundational infrastructure services;
  • Pattern- and template-based automated deployments for public clouds;
  • Migrating workloads and applications to the most appropriate cloud through a standardized, repeatable tool driven framework;
  • Monitoring and managing workloads using standardized tools and processes aligned to cloud platforms; and
  • Governing, tracking, managing and optimizing cloud usage and spend.
1. Classifying Organizational Data

 - Covers the identification of key business processes and their associated data types.
 - Outlines the importance of identifying process data owners and the required security controls for each data type.

2. Application Screening

 -  Looks at determining the most appropriate target deployment environment,
 -  each application’s business benefit, and
 -  key performance indicator options and target return on investment.
 -  This segment also shows how to select the most appropriate migration strategy for each application.

 3. Executing the Migration

- Includes selecting the most appropriate cloud service provider and technology services, 
- Reviewing and verifying available data security controls and suggested steps for SLA negotiations.
- Also addresses business/mission model alignment, organizational change management and migration project planning.

Saturday, June 30, 2018

AGILE Transformation - II

AGILE Metrics



AGILE Assessment





Define the Improvement Points



Select the Framework


Different types of frameworks are available in the AGILE model:

1. SAFe (Scaled Agile Framework)
2. LeSS (Large-Scale Scrum)
3. DAD (Disciplined Agile Delivery)

LeSS literally scales up the activities in Scrum, applying them at the team-of-teams level. In LeSS, large-scale planning takes one or two members from each team to form a second planning meeting; there is a daily standup that does the same as the daily scrum. The “overall retrospective,” which happens in the week after the end of a sprint, likewise pulls representatives from each team to discuss large program issues. On top of these, LeSS also adds open space, town hall meetings and other coordination and communication activities.






Implement AGILE in the rest of the Organization


Once the pilot is completed, the AGILE model can gradually be spread to the rest of the organization.

AGILE Transformation - I

AGILE Strategy




Strategic planning is important for the AGILE model. The organization has to take steps to change and adopt the AGILE model rather than the traditional model. Most importantly, the transformation should align with business goals rather than with technology-specific teams.

Training




AGILE training is important for the entire organization because the mindset of the team has to change. This is the first step toward changing minds and understanding the real benefits of AGILE.
Training is important at all levels: business, IT and the support organization too.

Pilot Projects and Introduce AGILE Model



Select the right projects for the AGILE pilot. Consider parameters such as criticality, complexity, duration and business readiness. Most importantly, select projects that can be broken down into small products, and select projects from different portfolios. For example, in a bank, select projects from Payments, Cash Services, Asset Management, Risk and Finance, Lending, etc.



Define the AGILE Roles in the projects


Though the traditional model has Program Managers, Project Managers, Team Leads, Developers and Testers, it is important to define the AGILE roles before starting the AGILE way of working. The Product Owner should come from the business team and also face the client(s) directly. The Scrum Master assignment depends upon the team. Team members can be developers and testers, but all should be multifunctional.



Define the Product Backlog with Product Owner



Select the SCRUM Masters

Scrum Masters can be dedicated to a team or shared with other SCRUM teams. The Scrum Master can also come from the team itself, depending on its maturity.



Sprint Duration

The duration of a sprint can vary from one week to four weeks, but the preferable duration is two or three weeks. Here are the reasons.




User Stories




AGILE Tools

JIRA is one of the preferred tools in the AGILE model and is widely used across organizations. Microsoft TFS is also used in a few organizations.

AGILE Transformation

A few important steps to define an AGILE transformation for any organization:

  1. Define an AGILE Strategy
  2. Train the Associates
  3. Select the projects which can go into the PILOT.
  4. Introduce AGILE into the selected projects.
  5. Define the AGILE roles in the selected projects.
  6. Define the Product Backlog along with product owner.
  7. Select the dedicated Scrum Masters (For Initial Period).
  8. Define the Sprint Duration
  9. Define the User Stories
  10. Select the AGILE Tools like JIRA, TFS
  11. Define the AGILE Metrics which can be followed in the AGILE Model
  12. Periodically do the AGILE Assessment with a framework (available in the market).
  13. Based on the AGILE Assessment score, improve the identified points.
  14. After the organization reaches the preferable score, implement the AGILE model in the rest of the organization.
  15. Select the best framework for the organization, like SAFe, LeSS, etc.