What is cloud computing?
“An emerging computer paradigm where data and services reside in massively scalable data centers in the cloud and can be accessed from any connected devices over the internet”...
Cloud computing is an emerging paradigm in the computer industry where computing is moved to a cloud of computers. It has become one of the buzzwords of the industry. The core concept of cloud computing is, quite simply, that the vast computing resources we need will reside somewhere out there in a cloud of computers, and we will connect to them and use them as and when needed.
Computing can be described as any activity of using and/or developing computer hardware and software. It spans the entire bottom layer of the stack, from raw compute power to storage capabilities. Cloud computing ties all these entities together and delivers them as a single integrated entity under its own sophisticated management.
The basis of cloud computing is to create a set of virtual servers on top of a vast resource pool and allocate them to clients. Any web-enabled device can then be used to access the resources through these virtual servers. Based on the client's computing needs, the allotted infrastructure can be scaled up or down.
Characteristics of Cloud Computing
1. Self-Healing
Any application or service running in a cloud computing environment has the property of self-healing. If the application fails, a hot backup is ready to take over without disruption. Multiple copies of the same application run simultaneously, each updating itself regularly, so that at the time of a failure at least one copy can take over without even the slightest change in its running state.
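The self-healing idea above can be sketched in a few lines of Python. This is a toy illustration, not any cloud provider's actual mechanism: the `Replica` class and `failover` function are hypothetical names, and a real platform would also replicate state and run continuous health checks.

```python
# Toy sketch of self-healing: several replicas are kept warm, and a
# monitor promotes a healthy standby the moment the active copy fails.
# All names here are illustrative, not from any specific cloud product.

class Replica:
    def __init__(self, name):
        self.name = name
        self.healthy = True

def failover(replicas, active):
    """Return the active replica, promoting a hot standby on failure."""
    if active.healthy:
        return active
    for standby in replicas:
        if standby is not active and standby.healthy:
            return standby  # standby takes over without disruption
    raise RuntimeError("no healthy replica available")

replicas = [Replica("copy-1"), Replica("copy-2"), Replica("copy-3")]
active = replicas[0]
active.healthy = False            # simulate a failure of the active copy
active = failover(replicas, active)
print(active.name)                # a standby has been promoted
```

In a real system the "monitor" role is played by the platform itself, which is why the application author never sees this logic.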
2. Multi-tenancy
With cloud computing, any application supports multi-tenancy - that is, multiple tenants at the same time. The system allows several customers to share the allotted infrastructure without any of them being aware of the sharing. This is done by virtualizing servers on the available machine pool and then allotting them to multiple users, in such a way that neither the users' privacy nor the security of their data is compromised.
3. Linearly Scalable
Cloud computing services are linearly scalable. The system is able to break workloads into pieces and serve them across the infrastructure. Linear scalability means that if one server can process, say, 1000 transactions per second, then two servers can process 2000 transactions per second.
4. Service-oriented
Cloud computing systems are all service-oriented - i.e. they are created out of other discrete services. Many independent discrete services are combined to form a larger service. This allows reuse of the services that already exist and of those being created; newly created services can in turn be composed into further services.
5. SLA Driven
Businesses usually have service-level agreements (SLAs) that specify the amount of service to be delivered, and scalability and availability problems can cause providers to break these agreements. Cloud computing services are SLA-driven: when the system experiences peaks of load, it automatically adjusts itself to comply with the SLAs, creating additional instances of the applications on more servers so that the load can be managed easily.
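The scaling decision behind SLA-driven behavior can be sketched as a tiny autoscaler. This is a hypothetical illustration: real platforms expose this through managed autoscaling policies, and the load and capacity figures below are made up.

```python
import math

# Toy SLA-driven autoscaler: given the current load and an agreed
# per-instance capacity, compute how many instances keep the SLA.

def instances_needed(current_load, capacity_per_instance, min_instances=1):
    """Return the number of application instances that satisfies the SLA."""
    needed = math.ceil(current_load / capacity_per_instance)
    return max(min_instances, needed)

# A load peak of 4500 transactions/s against 1000 transactions/s per
# instance requires five instances.
print(instances_needed(current_load=4500, capacity_per_instance=1000))  # 5
```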
6. Virtualized
The applications in cloud computing are fully decoupled from the underlying hardware. The cloud computing environment is a fully virtualized environment.
7. Flexible
Another feature of the cloud computing services is that they are flexible. They can be used to serve a large variety of workload types - varying from small loads of a small consumer application to very heavy loads of a commercial application.
NOTE:
An organization will always explore the most efficient offering for deploying a product to consumers. PaaS solutions are lightweight on initial setup: a team can release code to production within days.
However, there are use cases where customers interact with a service only once a day or for a couple of hours a day - for example, a service that updates a timetable with the new bus schedule once a day. In this case, a PaaS offering has one major downside: it is not cost-efficient. With Cloud Foundry, for instance, there will always be an instance of the application up and running, even if the service is invoked only once a day, yet the team is billed for the full day.
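A back-of-the-envelope calculation makes the cost difference concrete. The hourly PaaS rate and the per-second FaaS price below are made-up numbers for illustration; real pricing varies by provider and by resource allocation.

```python
# Daily cost of the once-a-day timetable service under the two models.
# All prices are illustrative placeholders, not real provider rates.

HOURS_PER_DAY = 24

def paas_daily_cost(hourly_rate):
    # A PaaS instance stays up all day, so the team pays for 24 hours.
    return hourly_rate * HOURS_PER_DAY

def faas_daily_cost(price_per_second, execution_seconds):
    # A FaaS function is billed only while it actually runs.
    return price_per_second * execution_seconds

print(paas_daily_cost(hourly_rate=0.10))
print(faas_daily_cost(price_per_second=0.0001, execution_seconds=30))
```

For a service that runs for 30 seconds a day, the FaaS bill is a tiny fraction of the always-on PaaS bill, which is the cost argument made above.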
☆》 What is Function as a Service (FaaS)?
For this scenario, a FaaS or Function as a Service is a more suitable offering. FaaS is an event-driven cloud-computing service that allows the execution of code without any management of the infrastructure and configuration files. As a result, the timetable update service is invoked only once a day, and for the rest of the time, there are no replicas of this service. A team will be billed only for the time the service is executed.
Popular FaaS solutions are AWS Lambda, Azure Functions, Cloud Functions from GCP, and many more.
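As a sketch of what the timetable function might look like, here is a minimal handler in the style of AWS Lambda's Python runtime. The `(event, context)` handler signature matches Lambda's convention, but the event fields and the `update_timetable` helper are hypothetical placeholders.

```python
# Minimal AWS-Lambda-style function for the timetable example.
# The event shape and update_timetable are illustrative assumptions.

def update_timetable(schedule):
    # Placeholder for the real update logic (e.g. writing to a database).
    return len(schedule)

def lambda_handler(event, context):
    updated = update_timetable(event.get("schedule", []))
    return {"statusCode": 200, "updatedEntries": updated}

# Locally, the handler can be exercised without any infrastructure:
print(lambda_handler({"schedule": ["bus-10", "bus-42"]}, None))
```

Note that there is no server, port, or configuration file anywhere in this code: the platform invokes the handler when the daily event fires, which is exactly why the team is billed only for execution time.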
Throughout the release process, a FaaS solution requires only the application code, which is built and executed immediately. Compared with a PaaS offering, a FaaS release is quicker to get running, as no data management or configuration files are necessary.
Cloud Foundry vs FaaS
Throughout its evolution, an organization needs to periodically evaluate the available cloud-computing services to ensure that its business requirements are always fulfilled. The industry has an abundance of cloud-computing offerings, such as on-premise, IaaS, PaaS, and FaaS, with a rich collection of open-source and vendor-managed tools. Cloud Foundry is an open-source PaaS that can be hosted on any available compute and provides a unified and powerful end-user experience. FaaS is an event-driven offering that increases the cost-efficiency of a platform.
Types of Cloud
- On-premise - cloud-computing model where a team owns the entire technology stack.
- IaaS or Infrastructure as a Service - cloud-computing service that offers the abstraction of the networking, storage, server, and virtualization layers.
- PaaS or Platform as a Service - cloud-computing service where the infrastructure components are managed fully by a third-party provider, and a team manages only the application and the data associated with it.
- Cloud Foundry - an open-source PaaS offering that can be hosted on any available infrastructure.
- FaaS or Function as a Service - event-driven cloud-computing service that requires only the application code to execute successfully.
When releasing new changes, the code should traverse the following stages:
- Build - compile the application source code and its dependencies. If this stage fails, the developer should address it immediately, as there might be missing dependencies or errors in the code.
- Test - run a suite of tests, such as unit testing, integration, UI, smoke, or security tests. These tests aim to validate the behavior of the code. If this stage fails, then developers must correct it to prevent dysfunctional code from reaching the end-users.
- Package - create an executable that contains the latest code and its dependencies. This is a runnable instance of the application that can be deployed to end-users.
- Deploy - push the packaged application to one or more environments, such as sandbox, staging, and production. Usually, the sandbox and staging deployments are automatic, while the production deployment requires engineering validation and triggering.
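The four stages above can be sketched as a toy pipeline. Each stage is a function that returns `True` on success, and the pipeline stops at the first failure, mirroring how a real CI/CD system short-circuits; the stage bodies are placeholders for the real commands.

```python
# Toy CI/CD pipeline: run the stages in order and stop on first failure.
# Each stage stands in for a real command (compiler, test runner, etc.).

def build():   return True   # compile source code and dependencies
def test():    return True   # run unit/integration/smoke tests
def package(): return True   # produce a runnable artifact
def deploy():  return True   # push to sandbox/staging/production

def run_pipeline(stages):
    for name, stage in stages:
        if not stage():
            return f"failed at {name}"
    return "released"

stages = [("build", build), ("test", test),
          ("package", package), ("deploy", deploy)]
print(run_pipeline(stages))  # released
```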
It is common practice to push an application through multiple environments before it reaches the end-users. Usually, these are categorized as follows:
- Sandbox - development environment, where new changes can be tested with minimal risk.
- Staging - an environment identical to production, where a release can be simulated without affecting the end-user experience.
- Production - customer-facing environment; any changes in this environment will affect the customer experience.
☆》 A delivery pipeline consists of two phases:
- Continuous Integration (or CI) includes the build, test, and package stages.
- Continuous Delivery (or CD) handles the deploy stage.
Advantages of a CI/CD pipeline
- Frequent releases - automation enables engineers to ship new code as soon as it's available and improves responsiveness to customer feedback.
- Less risk - automation of releases eliminates the need for manual intervention and configuration.
- Developer productivity - a structured release process allows every product to be released independently of other components.
GitHub Actions are event-driven workflows that can be executed when a new commit is available, on a schedule, or in response to external events. They can be easily integrated within any repository and provide immediate feedback on whether a new commit passes the quality checks. Additionally, GitHub Actions support multiple programming languages and can offer tailored notifications (e.g. in Slack) and status badges for a project.

A GitHub Action consists of one or more jobs. A job contains a sequence of steps that execute standalone commands, known as actions. When an event occurs, the GitHub Action is triggered and executes the sequence of commands to perform an operation, such as a code build or test.
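To make the jobs-and-steps structure concrete, here is a minimal workflow sketch for a Python project, triggered on every push. The action versions, Python version, and commands are illustrative choices to adapt to the project, not prescriptions.

```yaml
# Minimal GitHub Actions workflow: one job, four steps.
# Versions and commands below are illustrative placeholders.
name: ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4            # fetch the new commit
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt # build dependencies
      - run: python -m pytest                # test stage
```

The `on: [push]` line is the event trigger described above, `build` is the job, and each `uses`/`run` entry is one step in the job's sequence.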
The process of propagating an application through multiple environments, until it reaches the end-users, is known as the Continuous Delivery (or CD) stage. It is common practice to push the code through at least 3 environments: sandbox, staging, and production.
A reminder of each environment's purpose:
• Sandbox - development environment, where new changes can be tested with minimal risk.
• Staging - an environment identical to production, where a release can be simulated without affecting the end-user experience.
• Production - customer-facing environment; any changes in this environment will affect the customer experience.
The sandbox and staging deployments are fully automated. As such, if the deployment to sandbox is successful and meets the expected behavior, the code is propagated to staging automatically. However, the push to production requires engineering validation and triggering, as this is the environment that the end-users interact with. The production deployment can be fully automated, but doing so implies a high confidence that the code will not introduce customer-facing disruptions.
What is the Google File System?
Google File System (GFS) is a scalable distributed file system developed by Google for data-intensive applications. It is designed to provide efficient, reliable access to data using large clusters of commodity hardware. It provides fault tolerance while running on inexpensive commodity hardware, and it delivers high aggregate performance to a large number of clients. Files are divided into chunks of 64 megabytes, which are only rarely overwritten or shrunk; files are usually appended to or read. GFS is also designed and optimized to run on computing clusters whose nodes consist of cheap commodity machines.
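The fixed 64 MB chunk size makes the file-to-chunk mapping simple arithmetic, which can be sketched as follows (a worked example, not GFS's actual metadata code):

```python
import math

# GFS splits every file into fixed-size 64 MB chunks, so the number of
# chunks is simply the file size divided by 64 MB, rounded up.

CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB in bytes

def chunk_count(file_size_bytes):
    """Number of 64 MB chunks needed to store a file of the given size."""
    return math.ceil(file_size_bytes / CHUNK_SIZE)

print(chunk_count(1 * 1024**3))  # a 1 GB file needs 16 chunks
```

A large chunk size keeps per-file metadata small on the master, which is one reason GFS chose 64 MB rather than the few-kilobyte blocks of a conventional file system.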