DAY 1: Sunday 13th May 2018, 1:17 PM

After so many unsuccessful attempts at IT support and customer support roles, the one thing that could get me hired is Cloud Engineer, as it requires less customer communication and, for many companies, probably no Finnish language. The certification is very new, and cloud itself has only recently become popular, so it is a new role.

I was planning to start early but couldn't get up due to tiredness from yesterday's work. So I finally decided to start by going through YouTube videos for a general introduction to the topic.



Cloud computing metaphor: the group of networked elements providing services need not be individually addressed or managed by users; instead, the entire provider-managed suite of hardware and software can be thought of as an amorphous cloud.

Cloud computing is an information technology (IT) paradigm that enables ubiquitous access to shared pools of configurable system resources and higher-level services that can be rapidly provisioned with minimal management effort, often over the Internet. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a public utility.

This one is from Harvard, day 10 video

I put this video from day 7 because it has very helpful insight into cloud computing

There are three main ways to cloud compute (service models):

  1. IaaS – Infrastructure as a Service, e.g. GCP, AWS, Azure
  2. PaaS – Platform as a Service, e.g. App Engine, Azure
  3. SaaS – Software as a Service, e.g. Gmail, Google Drive, Salesforce, Dropbox, Office 365

There are three ways to deploy cloud (cloud deployment models):

  1. Private cloud
  2. Public cloud
  3. Hybrid cloud


Virtualization: the process of running software or an operating system on top of another operating system. For example, a hypervisor can be used to install several other OSes on top of the hardware. VirtualBox is an application that runs on Windows and can be used to install other OSes, including Linux.

#cloud computing

Feeling tired now; let's continue tomorrow.

DAY 2: 5/14/2018 12:15 PM

Yesterday I couldn't sleep properly, so I couldn't start early. Now, after working a 3-hour shift and having lunch, I am ready to continue with YouTube.


I dozed off for 2 hours going through the video. Now I am back. I couldn't progress much, but I did finish this course.

Alison: Introduction to Mobile and Cloud Computing

DAY 3: 5/15/2018, Tuesday 9:08 PM

Today was a very busy day. I got up early for my 6–9 AM work, then went back to my room for lunch, then again to the office for a brief meeting, then to Google Cloud OnBoard, which is related to the course I am doing. I hoped to gain good knowledge from the training, but it was not what I expected. Nothing is as effective as learning alone peacefully in my own room. After the cloud event I went back to the office, as a Python session was starting today. Today I just installed Python, nothing else. So, overall, the day was not fruitful.


Even though tomorrow will also be a busy day, I hope to learn more than today.

DAY 4: 5/16/2018 , Wednesday

Today was a disaster, not a single bit of progress. Yesterday I stayed up late searching the web and couldn't conclude anything, so I had to wake up early with incomplete sleep. Then I had to attend a meeting at the office, but I couldn't stay long, so I returned home early. I had some food and decided to sleep for a while. When I woke up and looked at my watch, it was already 5 PM. Lesson of the day: "always sleep on time and wake on time; it makes life easier."

DAY 5: 5/17/2018, Thursday

After finishing work early in the morning, I am ready now after having lunch. Let's go!


After dozing off for 2 hours, I am back to start a LinkedIn course on cloud computing.


Today I thought to try some other video services for searching cloud computing materials and found this one.


I don't know why the presenter is so negative about cloud computing. Real-time computation may be a challenge if we shift our systems to the cloud, but not all systems need a real-time connection to the cloud.

Now I feel too tired to continue and will stop for today. I feel like I am spending too much time on cloud computing in general rather than on GCP, which is more important. Let's see how this weekend goes.

Day 6: Date: 5/18/2018, Friday

For quite a few days it has felt like I am stuck in something and cannot get out of it. Frustration is building up even though I try to focus on my studies. Finland is not an easy place to pursue what you dream of; believe me, I have learned the hard way to believe in what I want to do. Working in three places just to stay on track with your dream is hard and frustrating.

Alison course :

network security


Too much security talk. Now let's move to YouTube videos.

For some time I needed different content, so I did this.


I couldn't finish the edX course. It is more of an Azure course, so I left it on day 10.

Day 7: Date: 5/19/2018, Saturday, 11:00 AM

A fine Saturday; it's sunny outside and I am, as usual, at my laptop. Sometimes I just want to run free on a sunny day, but this course binds me to this laptop. Oh! I forgot to mention I bought a new laptop. Don't ask how much it cost; it's a ThinkPad.


So, today I started with videos; let's finish whatever there is on Vimeo. (must see)

Multi-tenancy: instant updates with less effort; a consistent interface means less work for developers.
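As a quick sketch of why multi-tenancy gives instant updates and a consistent interface (the class and tenant names here are made up for illustration):

```python
# Minimal multi-tenancy sketch: one shared application instance,
# data partitioned per tenant by a tenant_id key.

class SharedApp:
    """Single code base serving many tenants; upgrading it updates everyone."""

    VERSION = "2.0"  # one deploy, instantly live for all tenants

    def __init__(self):
        self._data = {}  # tenant_id -> that tenant's records

    def save(self, tenant_id, record):
        self._data.setdefault(tenant_id, []).append(record)

    def records(self, tenant_id):
        # Each tenant sees only its own partition of the shared store.
        return self._data.get(tenant_id, [])

app = SharedApp()
app.save("acme", "invoice-1")
app.save("globex", "invoice-9")

print(app.records("acme"))  # acme never sees globex's data
print(app.VERSION)          # same version for every tenant
```

Because every tenant runs the same shared instance, one upgrade of `SharedApp` reaches all of them at once, which is where the "instant update, less effort" benefit comes from.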

SMS: I didn't know until now that Google previously had such an SMS service.

Software-defined networking (SDN) technology is a novel approach to cloud computing that facilitates network management and enables programmatically efficient network configuration in order to improve network performance and monitoring. SDN is meant to address the fact that the static architecture of traditional networks is decentralized and complex, while current networks require more flexibility and easy troubleshooting. SDN proposes centralizing network intelligence in one network component by disassociating the forwarding process of network packets (the data plane) from the routing process (the control plane). The control plane consists of one or more controllers, which are considered the brain of the SDN network, where the whole intelligence is incorporated. However, this centralization of intelligence has its own drawbacks when it comes to security, scalability and elasticity, and this is the main issue with SDN.
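A toy sketch of the control-plane/data-plane split described above (the switch names, addresses and ports are invented):

```python
# SDN sketch: a central controller (control plane) computes forwarding
# rules and installs them into switches, which only match and forward
# packets (data plane).

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # dst address -> output port

    def forward(self, dst):
        # Data plane: dumb table lookup, no routing logic here.
        return self.flow_table.get(dst, "drop")

class Controller:
    """The 'brain': all routing intelligence lives here, not in switches."""

    def __init__(self, switches):
        self.switches = switches

    def install_route(self, switch_name, dst, port):
        # Control plane pushes a rule down into a switch's flow table.
        self.switches[switch_name].flow_table[dst] = port

s1 = Switch("s1")
ctrl = Controller({"s1": s1})
ctrl.install_route("s1", "10.0.0.2", "port-2")

print(s1.forward("10.0.0.2"))  # forwarded on the installed port
print(s1.forward("10.0.0.9"))  # no rule yet -> drop
```

The switch never decides routes itself; it only applies whatever the controller installed, which is exactly the centralization (and the single point of failure) the paragraph warns about.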

Now I am tired; let's watch one film and sleep for today. Tomorrow I will continue again.

Total study hours: 8. New record!

DAY 8: Date : 5/20/2018 10:30 AM Sunday

Google has so many IP addresses that it is not possible to learn them all by heart, so the solution is to look them up.

If you wish to query this information anytime yourself, you can query the Google DNS TXT records in which Google publishes its netblocks, like so:

nslookup -q=TXT _spf.google.com 8.8.8.8

nslookup -q=TXT _netblocks.google.com 8.8.8.8

nslookup -q=TXT _netblocks2.google.com 8.8.8.8

nslookup -q=TXT _netblocks3.google.com 8.8.8.8

I completed the Foundation today, but due to some issue two labs didn't show as completed. I contacted support but couldn't finalize it.

It's 7:34 PM now and I am tired.

It's a record today: 9 hours continuous!

Even though there's not much to show here.

DAY 9: Date : 5/21/2018, Monday, 6:17 pm

Today I couldn't study as planned. I needed to be at the office for some time but got lost in a WordPress issue and couldn't even watch videos properly. I downloaded some e-books to read, but I don't know when I will read them. So now I thought to go through Vimeo and finish the Dailymotion videos today.

Metacafe has no content on cloud computing. Metacafe: finished.

I want to sit the exam sooner, but the remaining material is still huge!

Confused and tired.

DAY 10: Date : 5/22/2018, 10:51 AM Tuesday

Today I woke up late for my morning work, finished work and hurried back to my room for lunch. Then I hurried back again to the office for a meeting and some work. Now I am free to look at YouTube videos.

So, done for today. Let's see how much I can cover tomorrow.

Day 11: 5/23/2018 Wednesday 

Nothing got done yesterday; the whole day was spent attending a TEK event and going to the office. I did some office work but made no progress on this side.

Day 12: 5/24/2018 Thursday 11:31 AM

Today I went to work in the morning as usual, then back home, and now I am starting my videos again. Today I will try to finish the basic part and get into the main topics for certification.




LOAD BALANCING: A way in which traffic is redirected across several different servers, depending on whether a server is over capacity or the load balancer has a specific rule for distributing traffic among the servers in a cluster.
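A minimal sketch of that idea, assuming a simple round-robin rule and a fixed per-server capacity (server names and numbers are made up):

```python
# Load-balancing sketch: round-robin over a server pool, skipping any
# server that is already at capacity.

import itertools

class LoadBalancer:
    def __init__(self, servers, capacity):
        self.servers = servers          # server name -> active connections
        self.capacity = capacity        # max connections per server
        self._cycle = itertools.cycle(list(servers))

    def route(self):
        # Try each server at most once per request, in round-robin order.
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if self.servers[server] < self.capacity:
                self.servers[server] += 1
                return server
        raise RuntimeError("all servers over capacity")

lb = LoadBalancer({"web-1": 0, "web-2": 0}, capacity=2)
print([lb.route() for _ in range(4)])  # traffic spread across the pool
```

Real load balancers add health checks, connection draining, and smarter policies (least connections, weighted, hash-based), but the core loop is the same: pick a server that still has capacity and send the request there.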

DAY 13: Date: 5/25/2018 Friday 10:02 AM

I started early today and will try to finish some remaining videos and online courses on cloud computing and GCP. Let's see how it goes.

Cloud cert


Today it didn't go well, like most of this week. I got lazy and dozed off. I am most probably going to a friend's room for the weekend. My previous plan to finish learning by 30th May will not work for sure. Better to study thoroughly and then sit the exam than to appear just to save 50 euros; Google has had this scheme for some time. Now I will not study.

DAY 14 and 15 : Date 5/26 and 5/27/2018, Saturday and Sunday

For 2 days I went to my friend's city, Riihimaki, for his treat, so I couldn't progress.

Day 16 : Date : 5/28/2018, Monday, 11:15 AM

After finishing morning work and lunch, I am ready to continue with the study. Today I plan to finish at least one full course. Let's see how it goes.

Day 17: Date 5/29/2018 2:03 PM, Tuesday



Day 18 : Date 4th July 2018, Wednesday.

After a long break of around a month, I am again focusing on cloud, and it feels like I totally forgot the material. I need to rewind a bit before going forward, and this blog is the right way to do that; luckily I have a record of everything I did before, so rewinding is even easier.

Day 19 : Date 9th July 2018, Monday

I need to review more. I will focus briefly on cloud computing, then move on to Google Cloud Platform.

Diving into Infrastructure as a Service

IaaS, the most straightforward of the cloud delivery models, is the delivery of computing resources in the form of virtualized operating systems, workload management software, hardware, networking, and storage services. It may also include the delivery of operating systems and virtualization technology to manage the resources. IaaS provides compute power and storage services on demand. Instead of buying and installing the required resources in their traditional data center, companies rent these required resources as needed. This rental model can be implemented behind a company's firewall or through a third-party service provider.

Considering a private IaaS

A company would choose a private IaaS over a public one for three compelling reasons:

✓ The company needs to control access because of security.
✓ The company may require that business-critical applications demonstrate predictable performance while minimizing risk.
✓ The company sees itself as a service provider to its customers and partners.

Benefits of IaaS

✓ Flexibility to dynamically scale the environment to meet their needs
✓ Reduction in the need to build new IT infrastructure because of increases in demand for resources
✓ Cost savings from eliminating capital expenditures on large systems that may be underutilized much of the year
✓ Almost limitless storage and compute power

Exploring PaaS
PaaS is another foundational service that provides an abstracted and integrated environment for the development, running, and management of applications. Often the PaaS is tightly integrated with IaaS services because it's utilizing the underlying infrastructure provided by the IaaS. A primary benefit of a PaaS environment is that developers don't have to be concerned with some of the lower-level details of the environment.

Day 20: Date 11th July 2018, Wednesday


Understanding the benefits of PaaS

Organizations can gain a few different benefits through a PaaS environment. For example, it's possible to architect a private cloud environment so development and deployment services are integrated into the platform. This provides a similar benefit gained from a public PaaS but in a private environment. A private PaaS implementation can be designed to work in concert with public PaaS services. The benefits to using PaaS include the following:

✓ Improving the development life cycle: Effectively managing the application development life cycle can be challenging. For example, teams may be in different locations, with different objectives, and working on different platforms. When it comes time to integrate, test, and build the application, problems can arise because developers are working on different platforms with a different configuration than the operations team is working on. In another situation, some developers don't have the latest version of the code. These same developers may also be using a different set of tools. A key benefit of an abstracted platform is that it supports the life cycle of the application.

✓ Eliminating the installation and operational burden from an organization: Traditionally, when a new application server or other middleware is introduced into an organization, IT must make sure that the middleware can access other services that are required to run that application. This requirement can cause friction between Development and Operations. With PaaS, these conflicts are minimized. Because the PaaS environment is designed in a modular, service-oriented manner, components can be easily and automatically updated. When PaaS is provided by a third-party organization, those changes occur automatically without the user having to deal with the details. When PaaS is implemented in a private cloud, the IT organization can automate the process of updating a self-service interface to provision the most current services to the IT organization.

✓ Implementing standardization: PaaS enables development professionals and IT operations professionals to use the same services on the same platform. This approach takes away much of the misunderstanding that happens when the two teams with different responsibilities aren't in sync.

✓ Having ease of service provisioning: A PaaS provides easy provisioning of development services including build, test, and repository services to help eliminate bottlenecks associated with non-standard environments. This in turn improves efficiency, reduces errors, and ensures consistency in the management of the development life cycle. Additionally, PaaS provides ease of provisioning in runtime services that include application runtime containers for staging, running, and scaling applications.

These materials are the copyright of John Wiley & Sons, Inc. and any dissemination, distribution, or unauthorized use is strictly prohibited.

Many companies today are expanding into cloud computing as a way to reduce the cost and complexity of delivering traditional IT services. But determining the best mix of public and private cloud services and data center services is complicated.

Although IT has made the data center more efficient, organizations are taking a hard look at what workloads the centralized data center is well suited for. The reality is that the traditional data center is often best suited for complex line-of-business applications. These applications are often transaction-intensive and need to confirm and track the movement of financial transactions among customers, suppliers, and partners. Additionally, large, often highly customized systems of record are and will continue to be data center-based. These applications are typically tightly managed for corporate governance and compliance.

Exploring the Costs

When you're looking at the right balance of public cloud, private cloud, and data center services, you have to take a step back and look at the overall costs of every environment. Start by understanding what it costs you to operate your data center. To do this, look at both direct and indirect costs related to the application or type of workload you want to move to the cloud. Some of these indirect costs are hard to evaluate, making it difficult to accurately predict the actual costs of running any given application in your company. Here is a fairly comprehensive list of possible costs:

✓ Server costs: With this and all other hardware components, you're specifically interested in the total annual cost of ownership, which normally consists of the cost of hardware support plus some amortization cost for the purchase of the hardware. Additionally, a particular server may be used to support several different workloads. The more disparate workloads a server manages, the higher the support costs.

✓ Storage costs: What are the management and support costs for the storage hardware required for the data associated with this application? Storage costs may be very high for certain types of applications, such as e-mail or complex analytics.

✓ Network costs: When a web application you host internally, such as e-mail or collaboration, is moved to the cloud, this may reduce strain on your network. However, it can substantially increase bandwidth requirements.

✓ Backup and archive costs: The actual savings on backup costs depends on what the backup strategy is when the application moves into the cloud. The same is true of archiving. First, you have to understand who's doing the backup and archiving. Is backup the responsibility of the IT organization, or is it handled by the service provider? Will all backup be done in the cloud? If so, do you have a contingency plan if that cloud service is unavailable when you need that backup? Will your organization still be required to back up a percentage of critical data locally?

✓ Disaster recovery costs: In theory, the cloud service has its own disaster recovery capabilities, so there may be a consequential savings on disaster recovery. However, you need to clearly understand what your cloud provider's disaster recovery capability is. For example, does the cloud service provider have mirrored sites in case of a power outage at one data center location? IT management must determine the level of support the cloud provider will offer. This can be an added cost from the provider, or you may seek out a secondary vendor to handle disaster recovery and procedures.

✓ Data center infrastructure costs: A whole series of costs — including electricity, floor space, cooling, and building maintenance — are an integral part of managing any data center. Because of the large investment in data centers, moving workloads to a public cloud may not be financially viable if you're only utilizing as little as 40 percent of the data center's compute power. (Of course, you can deploy a private cloud to take advantage of the underutilized space and the advantages of the cloud.)

✓ Software maintenance costs: What's the annual maintenance cost for the software you may move to a cloud-based service? The answer can be complicated if the software license is part of a bundle or if the application is integrated with other applications. In addition, there's the cost of purchasing the software. Is the organization taking advantage of a "pay-as-you-go" licensing model that allows the user to pay only for what's used?

✓ Platform costs: Some applications run only on specific operating environments — Windows, Linux, HP-UX, IBM z/OS, AIX, and so on. The annual maintenance costs for the application operating environment need to be known and calculated as a part of the overall costs.

✓ Support personnel costs: What are your costs for staff support for day-to-day operations and management of this application? Will some of these costs be transferred to the cloud provider? Your own personnel will still be required to manage and monitor your cloud services in concert with your data center services.

✓ Infrastructure software costs: A whole set of infrastructure management software is in use in any installation in the data center and in a hybrid environment. Needless to say, associated costs are involved. For example, management software is typically used across a variety of data center applications and services. It is typically difficult to separate costs that may be applied to a hybrid cloud environment.

(Source: Cloud Services For Dummies, IBM Limited Edition.)
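These cost categories can be rolled up into a rough annual comparison; the figures below are invented placeholders, not real prices:

```python
# Back-of-envelope sketch for the cost categories listed above.
# All figures are invented placeholders, not real prices.

def annual_tco(costs):
    """Sum direct and indirect annual costs for running one workload."""
    return sum(costs.values())

on_premises = {
    "servers": 12000,       # hardware support + amortization
    "storage": 4000,
    "network": 2000,
    "backup_archive": 1500,
    "disaster_recovery": 3000,
    "facilities": 5000,     # power, floor space, cooling
    "software_maintenance": 6000,
    "support_staff": 20000,
}

public_cloud = {
    "compute_rental": 18000,  # pay-as-you-go instances
    "storage_rental": 3000,
    "bandwidth": 2500,        # egress bandwidth often costs more than expected
    "support_staff": 8000,    # you still monitor your own cloud services
}

print(annual_tco(on_premises), annual_tco(public_cloud))
```

The point of the exercise is not the specific numbers but that both sides include indirect items (staff, bandwidth, facilities) that are easy to forget when comparing only the hardware price against the rental price.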

Understanding Workloads

Because computing requirements are varied, so too are the workloads. Whether you're using an IaaS for infrastructure or you're developing SaaS applications using a PaaS, here are some of the kinds of workloads you're likely to find in a cloud environment:

✓ Batch workload: These workloads operate in the background and are rarely time sensitive. Batch workloads typically involve processing large volumes of data on a predictable schedule (for example, daily, monthly, or quarterly).

✓ Database workload: These are the most common type of workload, and they affect almost every environment in the data center and the cloud. A database workload must be tuned and managed to support the service that is using the data. A database workload tends to use a lot of Input/Output (I/O) cycles.

✓ Analytic workload: Organizations may want to use analytic services in a cloud environment to make sense of the vast amounts of data across a complex hybrid environment. In an analytics workload, the emphasis is on the ability to holistically analyze the data embedded in these workloads across public websites, private clouds, and the data warehouse. A social media analytics workload is a good example of this. These kinds of workloads tend to require real-time capabilities.

✓ Transactional workload: These are the automation of business processes such as billing and order processing. Traditionally, transactional workloads were restricted to a single system. However, with the increasing use of electronic commerce that reaches across partners and suppliers, transactional workloads must be managed across various partners' computing environments. These workloads are both compute and storage intensive. Depending on the cost-benefit analysis, it's likely that complex transactional workloads are best suited to a private cloud.

✓ Test/development workloads: Many organizations leverage the cloud as a platform for testing and development workloads. Using cloud services can make the process of creating and then testing applications much more cost effective and efficient. In this way, developers have access to a set of common configurations and development tools. Testing can be accomplished in a more efficient way within a cloud environment.
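The batch workload described above can be sketched as chunked processing on a schedule rather than one request at a time (the order data below is made up):

```python
# Batch-workload sketch: process a large volume of records in fixed-size
# chunks, as a nightly job would, rather than interactively per request.

def process_in_batches(records, batch_size, handler):
    results = []
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]   # one chunk of work
        results.append(handler(batch))
    return results

# e.g. nightly billing subtotals over the day's orders (invented numbers)
orders = [10, 25, 5, 40, 30, 15, 20]
totals = process_in_batches(orders, batch_size=3, handler=sum)
print(totals)  # per-batch subtotals
```

Because nothing here is latency-sensitive, this kind of job is a natural fit for cheap, schedulable cloud capacity, unlike the transactional workloads in the same list.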

Day 21 Date: 8/1/2018 Wednesday 9:36 AM

Immediately after returning from Nepal I couldn't devote much time to cloud, as my work schedule was very tight, but my dream to pursue this cloud certificate grew even bigger. Now I have managed my time, so I will devote time regularly to study.

Private Cloud services are owned and operated on-site by you and your company, operating as a single enterprise.

Public Cloud services are provided by third-party vendors. They may be multi-tenant or dedicated to you as a single company. Multi-tenant means that your company shares the solution with other organizations; the data is kept separate and secure.

Conduct a readiness assessment for your organization

A readiness assessment will let you know which areas of your business are ready and developed enough to warrant a cloud computing solution. This will also show which areas of your business need additional work to get them ready. This step is important, as it safeguards you from rushing into a solution for which you are not prepared or which could have a negative effect on your business. When assessing whether you are ready to move to the cloud, you should ask yourself the following questions:

  1. Do I currently use in-house enterprise applications (e.g. HR, accounting or even email)?
  2. Do I have to dedicate staff time to maintaining these applications?
  3. Do I currently use applications "in the cloud" (such as Gmail, Dropbox, Skype, Webex, etc.) without thinking I am using cloud computing?
  4. Do I have to invest in hardware, software licenses, etc. to run these applications and other needs (databases etc.)?
  5. Do I believe that the performance and reliability of my IT systems (software, hardware) could be improved?
  6. Does my company have issues keeping up with changes in technology, including software upgrades?
  7. Do my applications and data have different levels of privacy, sensitivity and mission-criticality?
If you answered yes to more than 5 of the questions above, it is very likely that your organization is ready to move at least partially to the cloud.

Any serious analysis of cloud computing must address the advantages and disadvantages offered by this burgeoning technology. What's good, and what's bad, about cloud computing? Let's take a look.

Advantages: We'll start with the advantages offered by cloud computing, and there are many.

  1. Lower-Cost Computers for Users
  2. Improved Performance
  3. Lower IT Infrastructure Costs
  4. Fewer Maintenance Issues
  5. Lower Software Costs
  6. Instant Software Updates
  7. Increased Computing Power
  8. Unlimited Storage Capacity
  9. Increased Data Safety
  10. Improved Compatibility Between Operating Systems
  11. Improved Document Format Compatibility
  12. Easier Group Collaboration
  13. Universal Access to Documents
  14. Latest Version Availability
  15. Removes the Tether to Specific Devices

Disadvantages: That's not to say, of course, that cloud computing is without its disadvantages. There are a number of reasons why you might not want to adopt cloud computing for your particular needs. Let's examine a few of the risks related to cloud computing.

  1. Requires a Constant Internet Connection
  2. Doesn't Work Well with Low-Speed Connections
  3. Can Be Slow
  4. Features Might Be Limited
  5. Stored Data Might Not Be Secure
  6. If the Cloud Loses Your Data, You're Screwed

Who Benefits from Cloud Computing? Let's face it, cloud computing isn't for everyone. What types of users, then, are best suited for cloud computing, and which aren't?

  1. Collaborators
  2. Road Warriors
  3. Cost-Conscious Users
  4. Cost-Conscious IT Departments
  5. Users with Increasing Needs

Cloud Computing for Everyone? Now that you know a little bit about how cloud computing works, let's look at how you can make cloud computing work for you. By that I mean real-world examples of how typical users can take advantage of the collaborative features inherent in web-based applications. We'll start our real-world tour of cloud computing by examining how an average family can use web-based applications for various purposes. As you'll see, computing in the cloud can help a family communicate and collaborate, and bring family members closer together.

I. Cloud Computing for the Family
  1. Centralizing Email Communications
  2. Collaborating on Schedules
  3. Collaborating on Grocery Lists
  4. Collaborating on To-Do Lists
  5. Collaborating on Household Budgets
  6. Collaborating on Contact Lists
  7. Collaborating on School Projects
  8. Sharing Family Photos

II. Cloud Computing for the Community
  1. Communicating Across the Community
  2. Collaborating on Schedules
  3. Collaborating on Group Projects and Events

III. Cloud Computing for the Corporation
  1. Managing Schedules
  2. Managing Contact Lists
  3. Managing Projects
  4. Collaborating on Reports
  5. Collaborating on Marketing Materials
  6. Collaborating on Expense Reports
  7. Collaborating on Budgets
  8. Collaborating on Financial Statements
  9. Collaborating on Presentations

Day 22 Date: 8/2/2018 Thursday 12:18 AM

I got up late today because of the late-night film festival in my room, but now I have started again. Reviewed up to Day 6.

Day 23 Date: 3rd Aug 2018 Friday 12:23 AM


The two most popular storage system technologies are file level storage and block level storage. File level storage is seen and deployed in Network Attached Storage (NAS) systems. Block level storage is seen and deployed in Storage Area Network (SAN) storage. In the article below, we will explain the major differences between file level storage vs. block level storage. File Level Storage – This storage technology is most commonly used for storage systems, which is found in hard drives, NAS systems and so on. In this File Level storage, the storage disk is configured with a protocol such as NFS or SMB/CIFS and the files are stored and accessed from it in bulk.

  • File level storage is simple to use and implement.
  • It stores files and folders, and the visibility is the same to the clients accessing it and to the system which stores it.
  • This level of storage is inexpensive to maintain compared to its counterpart, block level storage.
  • Network attached storage systems usually depend on this file level storage.
  • File level storage can handle access control, integrate with corporate directories, and so on.
  • “Scale Out NAS” is a type of File level storage that incorporates a distributed file system that can scale a single volume with a single namespace across many nodes. Scale Out NAS File level storage solutions can scale up to several petabytes all while handling thousands of clients. As capacity is scaled out, performance is scaled up.

Block Level Storage – In this block level storage, raw volumes of storage are created and each block can be controlled as an individual hard drive. These Blocks are controlled by server based operating systems and each block can be individually formatted with the required file system.

  • Block level storage is usually deployed in SAN or storage area network environment.
  • This level of storage offers boot-up of the systems that are connected to it.
  • Block level storage can be used to store files and can work as storage for special applications like databases, Virtual machine file systems and so on.
  • Block level storage data transportation is much more efficient and reliable.
  • Block level storage supports individual formatting of file systems like NFS, NTFS or SMB (Windows) or VMFS (VMware) which are required by the applications.
  • Each storage volume can be treated as an independent disk drive and it can be controlled by external server operating system.
  • Block level storage uses iSCSI and FCoE protocols for data transfer, as SCSI commands act as the communication interface between the initiator and the target.

DAS – Directly attached storage
HBA – Host bus adapter
SAN – Storage area network
NAS – Network attached storage (LAN only)

Three main things in cloud computing are:

  1. Storage
  2. Database hosting
  3. Application hosting

Day 24 : 9th Aug 2018 Thursday 10:23 AM

After going through the review process, I plan to do all the LinkedIn courses related to cloud computing in my spare time.


Day 25 : 13th Aug 2018 Monday 11:04 AM

After a weekend of work and participating in the personal branding weekend yesterday, I feel more energetic to start learning again.

first look

Day 26 : 21st Aug 2018 Tuesday 9:23 AM

The National Institute of Standards and Technology (NIST), the official US-based standards and technology definitions body, has identified five essential characteristics of a cloud computing solution:

  • On-demand self-service
  • Broad network access
  • Resource pooling
  • Rapid elasticity
  • Measured service

Public clouds are the most common way of deploying cloud computing. The cloud resources (such as servers and storage) are owned and operated by third-party cloud service providers and delivered over the Internet. Microsoft Azure is an example of a public cloud. With a public cloud, all hardware, software, and other supporting infrastructure is owned and managed by the cloud provider.

In a public cloud, organizations share the same hardware, storage, and network devices with other organizations. Organizations that use public clouds are known as cloud “tenants.” They access services and manage accounts using a web browser. Public cloud deployments are frequently used to provide web-based email, online office applications, storage, virtual machines, databases for production, and testing and development environments.

The advantages of public clouds include:

  • Lower costs. There is no need to purchase hardware or software, and tenants pay only for the services they use.
  • No maintenance. The service provider handles maintenance.
  • Near-unlimited scalability. On-demand resources are available to meet the tenant’s business needs.
  • High reliability. A vast network of servers protects against failure.

A private cloud consists of computing resources used exclusively by one organization. The private cloud can be physically located at your organization’s on-site datacenter, or it can be hosted by a third-party service provider.

In a private cloud, the services and infrastructure are always maintained on a private network, and the hardware and software are dedicated solely to that organization. In this way, a private cloud can make it easier for an organization to customize its resources to meet specific IT requirements. Private clouds are often used by government agencies, financial institutions, and other mid- to large-size organizations with business-critical operations seeking enhanced control over their environment.

The advantages of private clouds include:

  • More flexibility. The organization can customize its cloud environment to meet specific business needs.
  • Improved security. Resources are not shared with others, so higher levels of control and security are possible.
  • High scalability. Private clouds still afford the scalability and efficiency of a public cloud.

Often called “the best of both worlds,” hybrid clouds combine public clouds with on-premises infrastructure, or private clouds, so organizations can reap the advantages of both. In a hybrid cloud, data and applications can move between private and public clouds for greater flexibility and more deployment options. For instance, you can use the public cloud for high-volume, lower-security needs such as web-based email, and the private cloud (or other on-premises infrastructure) for sensitive, business-critical operations such as financial reporting.

A hybrid cloud also offers the ability to employ “cloud bursting”. With this option, an application or resource runs in the private cloud until there is a spike in demand (such as a seasonal event like online shopping or tax filing), at which point the organization can “burst through” to the public cloud to tap into additional computing resources.

The advantages of hybrid clouds include:

  • Control. Organizations can maintain a private infrastructure for sensitive assets.
  • Flexibility. Tenants can take advantage of additional resources in the public cloud when they are needed.
  • Cost-effectiveness. With the ability to scale to the public cloud, tenants pay for extra computing power only when needed.
  • Ease-of-use. Transitioning to the cloud doesn’t have to be overwhelming because tenants can migrate gradually, phasing in workloads over time.
[Figure: a cloud with three icons beneath it (a laptop, a database, and two servers) representing SaaS, PaaS, and IaaS respectively.]


SaaS offerings consist of fully-formed software applications that are delivered as cloud-based services. Users can subscribe to the service and use the application, normally through a web browser or by installing a client-side application. Examples of Microsoft SaaS services include Microsoft Office 365™, Skype®, and Microsoft Dynamics CRM Online. The primary advantage of SaaS services is that they enable users to easily access applications without the need to install and maintain them. Typically, users do not have to worry about issues such as updating applications and maintaining compliance because the service provider handles them.


PaaS offerings consist of cloud-based services that provide resources on which developers can build their own solutions. PaaS typically encapsulates fundamental operating system (OS) capabilities, including storage and compute capacity, in addition to functional services for custom applications. Usually PaaS offerings provide application programming interfaces (APIs), in addition to configuration and management user interfaces. Azure provides PaaS services that simplify the creation of solutions such as web and mobile applications. PaaS enables developers and organizations to create highly scalable custom applications without having to provision and maintain hardware and operating system resources. Examples of PaaS include Azure Websites and Azure Cloud Services, which can run a web application that your developer team creates.


IaaS offerings provide virtualized server, network, and storage infrastructure components that can be easily provisioned and decommissioned as required. Typically, IaaS facilities are managed in a similar way to on-premises infrastructure and provide an easy migration path for moving existing applications to the cloud.

A key point to note is that an infrastructure service might be a single IT resource—such as a virtual server that has a default installation of Windows Server and Microsoft SQL Server—or it might be a completely preconfigured infrastructure environment for a specific application or business process. For example, a retail organization might empower departments to provision their own database servers to use as data stores for custom applications. Alternatively, the organization might define a set of virtual machine and network templates that can be provisioned as a single unit to implement a complete, preconfigured infrastructure solution for a branch or store, including all the required applications and settings.

When designing an application, four basic options are available for hosting the SQL Server part of the application:

  • SQL Server on non-virtualized physical machines
  • SQL Server in on-premises virtualized machines (private cloud)
  • SQL Server in Azure Virtual Machine (Microsoft public cloud)
  • Azure SQL Database (Microsoft public cloud)
[Figure: cloud SQL Server options: SQL Server on IaaS, or a SaaS SQL database in the cloud.]


  • Language AI. Used to help apps understand user commands, fix spelling and grammar errors, recognize context of text for better search results, translate text into another language, and make suggestions for what to type next as a user is typing.
  • Speech AI. Used in real-time speech translation, identifying the distinct voices of different users, and helping to resolve speech barriers such as speech impediments, thick accents, and background noise. It can also convert speech to text and text to speech.
  • Vision AI. Used to help apps identify and tag people’s faces in photographs. With vision AI apps can also recognize emotions, automatically moderate content, and index images and videos so they don’t have to be hand-tagged to appear in search results.
  • Knowledge AI. Used to recommend items to customers, convert complex information into simple answers, and help make interactive search more natural. Knowledge AI can also integrate academic information into apps and make simple decisions on its own. It learns through experience, so it gets smarter and more precise over time.
  • Search AI. Used for more precise and accurate results when searching news articles, images, videos, websites, and documents. Provides intelligent autosuggest options. Search AI is integrated with the Bing search engine.

Day 27 : 22nd Aug 2018 Wednesday 9:30 AM


Day 28 : 23rd Aug 2018 Wednesday 9:30 AM

From today I am focusing particularly on Google, as I am preparing for the Google certification. The internet contains so many cloud and DevOps topics that it is nearly impossible to know them all, so let's focus on Google only.

GCP services

Day 29 : 27th Aug 2018 Monday 11:30 AM

GCP PRACTICE

1. Creating a virtual machine with Windows on it and accessing it remotely.

First of all we need to create a project. Everything in GCP starts with a project, and all billing is based on the resources used by the project. So, let's create a new project with a unique name.


After creating the project, the second task is to create a virtual machine. Virtual machines (VMs, or VM instances) are the cloud infrastructure on which you can implement whatever server or system you want to build. To create one, go to Compute Engine > VM instances.


When you select VM instances you will be prompted to select which project you would like to create VM instances for.


Go ahead and select Create (you can also import VM instances from other places).


Now you will be presented with detailed configuration options for your new VM instance:

  • Name must be unique throughout the cloud.
  • In Region, select where you want to deploy.
  • In Machine type, select one that matches the requirements of the OS you will be using. Shared-core machines are cheap but slow; dedicated-core machines are costly but fast.
  • For the boot disk, you can supply your own image or pick from Google's repository. Many Linux distributions are available, but only limited versions of Windows.
  • Don't forget to allow HTTP and HTTPS access if you plan to reach the server from a browser or give it web server access. Since for now we are just accessing a Windows machine remotely, there is no need to focus on these settings.


After you select Create, it will take a few seconds before the instance says ready. You will be assigned both an internal and an external IP, so you can access the machine from within the cloud and from outside it.


Remember to reset the password for the Windows machine and copy it to Notepad.


Now open Windows Remote Desktop Connection.


Use the IP provided on the previous screen to log in to the server.


Just accept if you are prompted with a security certificate warning.


“BOOM”, you are on the cloud now.


Prashanta Paudel


First of all, we need to determine whether we are going to build the web server on Linux or Windows. On Linux we can build an Apache web server; on Windows we can build an IIS web server. So, let's create a sample project.


For our test we are going with Linux, as it is the most widely used platform for web servers.


All other configurations are as in the previous setup, except that we have chosen a Debian distribution for this purpose.


After the VM instance is ready we can now access the new instance from our web browser by clicking the SSH button shown below.


Now you need to install Apache.

  1. Use the Debian package manager to install the apache2 package: sudo apt-get update && sudo apt-get install apache2 -y
  2. Overwrite the Apache web server's default web page with the following command: echo '<!doctype html><html><body><h1>Hello World!</h1></body></html>' | sudo tee /var/www/html/index.html
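It is worth verifying the overwrite before leaving the shell. A small sketch of step 2 with a check added (the path is the stock Debian Apache document root):

```shell
# Write the page into Apache's document root, then confirm it landed.
echo '<!doctype html><html><body><h1>Hello World!</h1></body></html>' \
  | sudo tee /var/www/html/index.html > /dev/null
grep -c '<h1>Hello World!</h1>' /var/www/html/index.html   # prints 1 once the page is in place
```

tee is needed (rather than a plain `>` redirect) because the redirect would run with your own privileges, not root's.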

The full log of the events is below:

Connected, host fingerprint: ssh-rsa 2048 24:FC:11:ED:D1:53:7D:E7:BD:E8:57:5B:EF:FC:B3:1A:C4:FF:31:7F:A6:8F:79:A0:E6:45:D1:2C:8F:6D:20:2C
Linux instance-1 4.9.0-8-amd64 #1 SMP Debian 4.9.110-3+deb9u3 (2018-08-19) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
praspaudel@instance-1:~$ sudo apt-get update && sudo apt-get install apache2 -y
Ign:1 stretch InRelease
Get:2 stretch/updates InRelease [94.3 kB]
Get:3 stretch-updates InRelease [91.0 kB]
Get:4 stretch-backports InRelease [91.8 kB]
Hit:5 stretch Release
Get:6 cloud-sdk-stretch InRelease [6,377 B]
Get:7 stretch/updates/main Sources [164 kB]
Get:8 stretch/updates/main amd64 Packages [391 kB]
Get:9 stretch/updates/main Translation-en [186 kB]
Hit:10 google-compute-engine-stretch-stable InRelease
Get:11 stretch-backports/main Sources.diff/Index [27.8 kB]
Get:12 stretch-backports/main amd64 Packages.diff/Index [27.8 kB]
Get:13 stretch-backports/main Translation-en.diff/Index [27.8 kB]
Get:14 stretch-backports/main Sources 2018-08-20-2007.21.pdiff [2,859 B]
Get:15 stretch-backports/main Sources 2018-08-21-1407.05.pdiff [11.8 kB]
Get:16 stretch-backports/main Sources 2018-08-21-2007.34.pdiff [1,188 B]
Get:17 stretch-backports/main Sources 2018-08-22-0215.58.pdiff [29 B]
Get:18 stretch-backports/main Sources 2018-08-22-0810.49.pdiff [31 B]
Get:19 stretch-backports/main Sources 2018-08-22-1409.51.pdiff [1,378 B]
Get:20 stretch-backports/main Sources 2018-08-22-2018.44.pdiff [1,570 B]
Get:21 stretch-backports/main Sources 2018-08-23-0229.35.pdiff [54 B]
Get:22 stretch-backports/main Sources 2018-08-23-0812.54.pdiff [31 B]
Get:23 stretch-backports/main Sources 2018-08-24-0207.06.pdiff [1,790 B]
Get:24 stretch-backports/main Sources 2018-08-24-0808.44.pdiff [726 B]
Get:25 stretch-backports/main Sources 2018-08-24-1444.41.pdiff [566 B]
Get:26 stretch-backports/main Sources 2018-08-24-2007.09.pdiff [2,206 B]
Get:27 stretch-backports/main Sources 2018-08-25-0208.21.pdiff [31 B]
Get:28 stretch-backports/main Sources 2018-08-25-0811.11.pdiff [447 B]
Get:29 stretch-backports/main Sources 2018-08-25-1408.24.pdiff [963 B]
Get:30 stretch-backports/main Sources 2018-08-25-2007.22.pdiff [412 B]
Hit:31 google-cloud-packages-archive-keyring-stretch InRelease
Get:33 stretch-backports/main Sources 2018-08-26-1408.46.pdiff [1,328 B]
Get:34 stretch-backports/main Sources 2018-08-26-2009.36.pdiff [2,426 B]
Get:35 stretch-backports/main Sources 2018-08-27-0207.51.pdiff [33 B]
Get:36 stretch-backports/main Sources 2018-08-27-0807.58.pdiff [61 B]
Get:37 stretch-backports/main amd64 Packages 2018-08-20-2007.21.pdiff [560 B]
Get:38 stretch-backports/main amd64 Packages 2018-08-21-0215.33.pdiff [1,549 B]
Get:39 stretch-backports/main amd64 Packages 2018-08-21-1407.05.pdiff [17.3 kB]
Get:40 stretch-backports/main amd64 Packages 2018-08-22-0215.58.pdiff [328 B]
Get:41 stretch-backports/main amd64 Packages 2018-08-22-1409.51.pdiff [753 B]
Get:42 stretch-backports/main amd64 Packages 2018-08-22-2018.44.pdiff [1,050 B]
Get:43 stretch-backports/main amd64 Packages 2018-08-23-0229.35.pdiff [517 B]
Get:44 stretch-backports/main amd64 Packages 2018-08-24-0808.44.pdiff [675 B]
Get:45 stretch-backports/main amd64 Packages 2018-08-24-1444.41.pdiff [3,144 B]
Get:46 cloud-sdk-stretch/main amd64 Packages [45.5 kB]
Get:47 stretch-backports/main amd64 Packages 2018-08-25-0208.21.pdiff [1,629 B]
Get:48 stretch-backports/main amd64 Packages 2018-08-25-0811.11.pdiff [222 B]
Get:49 stretch-backports/main amd64 Packages 2018-08-25-2007.22.pdiff [910 B]
Get:50 stretch-backports/main amd64 Packages 2018-08-26-2009.36.pdiff [372 B]
Get:51 stretch-backports/main amd64 Packages 2018-08-27-0207.51.pdiff [225 B]
Get:52 stretch-backports/main amd64 Packages 2018-08-27-0807.58.pdiff [3,514 B]
Get:53 stretch-backports/main Translation-en 2018-08-20-2007.21.pdiff [174 B]
Get:54 stretch-backports/main Translation-en 2018-08-21-1407.05.pdiff [8,730 B]
Get:55 stretch-backports/main Translation-en 2018-08-24-0808.44.pdiff [160 B]
Get:56 stretch-backports/main Translation-en 2018-08-24-1444.41.pdiff [31 B]
Get:57 stretch-backports/main Translation-en 2018-08-25-0208.21.pdiff [382 B]
Fetched 1,226 kB in 1s (870 kB/s)
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following additional packages will be installed:
apache2-bin apache2-data apache2-utils libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libicu57 liblua5.2-0 libperl5.24 libxml2 perl
perl-modules-5.24 rename sgml-base ssl-cert xml-core
Suggested packages:
www-browser apache2-doc apache2-suexec-pristine | apache2-suexec-custom perl-doc libterm-readline-gnu-perl | libterm-readline-perl-perl make sgml-base-doc
openssl-blacklist debhelper
The following NEW packages will be installed:
apache2 apache2-bin apache2-data apache2-utils libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libicu57 liblua5.2-0 libperl5.24 libxml2 perl
perl-modules-5.24 rename sgml-base ssl-cert xml-core
0 upgraded, 18 newly installed, 0 to remove and 5 not upgraded.
Need to get 17.3 MB of archives.
After this operation, 80.5 MB of additional disk space will be used.
Get:1 stretch/main amd64 perl-modules-5.24 all 5.24.1-3+deb9u4 [2,724 kB]
Get:2 stretch/main amd64 libperl5.24 amd64 5.24.1-3+deb9u4 [3,522 kB]
Get:3 stretch/main amd64 perl amd64 5.24.1-3+deb9u4 [218 kB]
Get:4 stretch/main amd64 libapr1 amd64 1.5.2-5 [96.6 kB]
Get:5 stretch/main amd64 libaprutil1 amd64 1.5.4-3 [85.8 kB]
Get:6 stretch/main amd64 libaprutil1-dbd-sqlite3 amd64 1.5.4-3 [19.3 kB]
Get:7 stretch/main amd64 libaprutil1-ldap amd64 1.5.4-3 [17.4 kB]
Get:8 stretch/main amd64 liblua5.2-0 amd64 5.2.4-1.1+b2 [110 kB]
Get:9 stretch/main amd64 libicu57 amd64 57.1-6+deb9u2 [7,700 kB]
Get:10 stretch/main amd64 libxml2 amd64 2.9.4+dfsg1-2.2+deb9u2 [920 kB]
Get:11 stretch/main amd64 apache2-bin amd64 2.4.25-3+deb9u5 [1,186 kB]
Get:12 stretch/main amd64 apache2-utils amd64 2.4.25-3+deb9u5 [217 kB]
Get:13 stretch/main amd64 apache2-data all 2.4.25-3+deb9u5 [162 kB]
Get:14 stretch/main amd64 apache2 amd64 2.4.25-3+deb9u5 [236 kB]
Get:15 stretch/main amd64 sgml-base all 1.29 [14.8 kB]
Get:16 stretch/main amd64 rename all 0.20-4 [12.5 kB]
Get:17 stretch/main amd64 ssl-cert all 1.0.39 [20.8 kB]
Get:18 stretch/main amd64 xml-core all 0.17 [23.2 kB]
Fetched 17.3 MB in 0s (66.0 MB/s)
Preconfiguring packages …
Selecting previously unselected package perl-modules-5.24.
(Reading database … 33394 files and directories currently installed.)
Preparing to unpack …/00-perl-modules-5.24_5.24.1-3+deb9u4_all.deb …
Unpacking perl-modules-5.24 (5.24.1-3+deb9u4) …
Selecting previously unselected package libperl5.24:amd64.
Preparing to unpack …/01-libperl5.24_5.24.1-3+deb9u4_amd64.deb …
Unpacking libperl5.24:amd64 (5.24.1-3+deb9u4) …
Selecting previously unselected package perl.
Preparing to unpack …/02-perl_5.24.1-3+deb9u4_amd64.deb …
Unpacking perl (5.24.1-3+deb9u4) …
Selecting previously unselected package libapr1:amd64.
Preparing to unpack …/03-libapr1_1.5.2-5_amd64.deb …
Unpacking libapr1:amd64 (1.5.2-5) …
Selecting previously unselected package libaprutil1:amd64.
Preparing to unpack …/04-libaprutil1_1.5.4-3_amd64.deb …
Unpacking libaprutil1:amd64 (1.5.4-3) …
Selecting previously unselected package libaprutil1-dbd-sqlite3:amd64.
Preparing to unpack …/05-libaprutil1-dbd-sqlite3_1.5.4-3_amd64.deb …
Unpacking libaprutil1-dbd-sqlite3:amd64 (1.5.4-3) …
Selecting previously unselected package libaprutil1-ldap:amd64.
Preparing to unpack …/06-libaprutil1-ldap_1.5.4-3_amd64.deb …
Unpacking libaprutil1-ldap:amd64 (1.5.4-3) …
Selecting previously unselected package liblua5.2-0:amd64.
Preparing to unpack …/07-liblua5.2-0_5.2.4-1.1+b2_amd64.deb …
Unpacking liblua5.2-0:amd64 (5.2.4-1.1+b2) …
Selecting previously unselected package libicu57:amd64.
Preparing to unpack …/08-libicu57_57.1-6+deb9u2_amd64.deb …
Unpacking libicu57:amd64 (57.1-6+deb9u2) …
Selecting previously unselected package libxml2:amd64.
Preparing to unpack …/09-libxml2_2.9.4+dfsg1-2.2+deb9u2_amd64.deb …
Unpacking libxml2:amd64 (2.9.4+dfsg1-2.2+deb9u2) …
Selecting previously unselected package apache2-bin.
Preparing to unpack …/10-apache2-bin_2.4.25-3+deb9u5_amd64.deb …
Unpacking apache2-bin (2.4.25-3+deb9u5) …
Selecting previously unselected package apache2-utils.
Preparing to unpack …/11-apache2-utils_2.4.25-3+deb9u5_amd64.deb …
Unpacking apache2-utils (2.4.25-3+deb9u5) …
Selecting previously unselected package apache2-data.
Preparing to unpack …/12-apache2-data_2.4.25-3+deb9u5_all.deb …
Unpacking apache2-data (2.4.25-3+deb9u5) …
Selecting previously unselected package apache2.
Preparing to unpack …/13-apache2_2.4.25-3+deb9u5_amd64.deb …
Unpacking apache2 (2.4.25-3+deb9u5) …
Selecting previously unselected package sgml-base.
Preparing to unpack …/14-sgml-base_1.29_all.deb …
Unpacking sgml-base (1.29) …
Selecting previously unselected package rename.
Preparing to unpack …/15-rename_0.20-4_all.deb …
Unpacking rename (0.20-4) …
Selecting previously unselected package ssl-cert.
Preparing to unpack …/16-ssl-cert_1.0.39_all.deb …
Unpacking ssl-cert (1.0.39) …
Selecting previously unselected package xml-core.
Preparing to unpack …/17-xml-core_0.17_all.deb …
Unpacking xml-core (0.17) …
Setting up libapr1:amd64 (1.5.2-5) …
Setting up perl-modules-5.24 (5.24.1-3+deb9u4) …
Setting up libperl5.24:amd64 (5.24.1-3+deb9u4) …
Setting up apache2-data (2.4.25-3+deb9u5) …
Setting up ssl-cert (1.0.39) …
Setting up sgml-base (1.29) …
Setting up libicu57:amd64 (57.1-6+deb9u2) …
Setting up libxml2:amd64 (2.9.4+dfsg1-2.2+deb9u2) …
Setting up perl (5.24.1-3+deb9u4) …
update-alternatives: using /usr/bin/prename to provide /usr/bin/rename (rename) in auto mode
Processing triggers for libc-bin (2.24-11+deb9u3) …
Setting up libaprutil1:amd64 (1.5.4-3) …
Processing triggers for systemd (232-25+deb9u4) …
Processing triggers for man-db ( …
Setting up liblua5.2-0:amd64 (5.2.4-1.1+b2) …
Setting up xml-core (0.17) …
Setting up libaprutil1-ldap:amd64 (1.5.4-3) …
Setting up libaprutil1-dbd-sqlite3:amd64 (1.5.4-3) …
Setting up rename (0.20-4) …
update-alternatives: using /usr/bin/file-rename to provide /usr/bin/rename (rename) in auto mode
Setting up apache2-utils (2.4.25-3+deb9u5) …
Setting up apache2-bin (2.4.25-3+deb9u5) …
Setting up apache2 (2.4.25-3+deb9u5) …
Enabling module mpm_event.
Enabling module authz_core.
Enabling module authz_host.
Enabling module authn_core.
Enabling module auth_basic.
Enabling module access_compat.
Enabling module authn_file.
Enabling module authz_user.
Enabling module alias.
Enabling module dir.
Enabling module autoindex.
Enabling module env.
Enabling module mime.
Enabling module negotiation.
Enabling module setenvif.
Enabling module filter.
Enabling module deflate.
Enabling module status.
Enabling module reqtimeout.
Enabling conf charset.
Enabling conf localized-error-pages.
Enabling conf other-vhosts-access-log.
Enabling conf security.
Enabling conf serve-cgi-bin.
Enabling site 000-default.
Created symlink /etc/systemd/system/ → /lib/systemd/system/apache2.service.
Created symlink /etc/systemd/system/ → /lib/systemd/system/apache-htcacheclean.service.
Processing triggers for libc-bin (2.24-11+deb9u3) …
Processing triggers for sgml-base (1.29) …
Processing triggers for systemd (232-25+deb9u4) …
praspaudel@instance-1:~$ echo ‘<!doctype html><html><body><h1>Hello World!</h1></body></html>’ | sudo tee /var/www/html/index.html
<!doctype html><html><body><h1>Hello World!</h1></body></html>

After finishing the setup, find the correct public IP for the web server from the instance list and put it in the browser. At this time you can only use HTTP.

“READY”, the basic website is done.


Day 30 : 28th Aug 2018 Monday 9:00 AM

Today I will be continuing my LinkedIn certifications.


GOOGLE CLOUD API PRACTICE

Building a web app that allows you to upload an image and apply a Google API to label it on the same page. What we should consider in building this system:

  1. Cheap
  2. Fast
  3. Low resource usage
  4. Redundant


I built a file server with basic upload and listing options.

Day 31 : 5th September 2018 Wednesday 11:00 AM

Today I want to complete one certification and start a mini project.


Now I want to move to the practical aspects of Google Cloud Platform. So, let's do some labs.

Creating a virtual machine: in practice, every project requires some kind of server as a base. As we are dealing with the cloud, the basis of every project is a virtual machine with some kind of server running in it. Creating a VM is the first task in every project.



Google Cloud Self-Paced Labs


Google Compute Engine lets you create virtual machines running different operating systems, including multiple flavors of Linux (Debian, Ubuntu, Suse, Red Hat, CoreOS) and Windows Server, on Google infrastructure. You can run thousands of virtual CPUs on a system that has been designed to be fast and to offer strong consistency of performance.

In this hands-on lab you’ll learn how to create virtual machine instances of various machine types using the Google Cloud Platform (GCP) Console and using the gcloud command line. You’ll also learn how to connect an NGINX web server to your virtual machine.

Although you can easily copy and paste commands from the lab to the appropriate place, students should type the commands themselves to reinforce their understanding of the core concepts.
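As a preview of the gcloud route mentioned above, a sketch of the commands involved (the instance name, zone, and machine type here are placeholder values, not prescribed by the lab; an authenticated Cloud SDK is assumed):

```shell
# Create a VM from the command line.
gcloud compute instances create my-lab-vm \
    --zone us-central1-c \
    --machine-type n1-standard-2

# Open an SSH session to it, then install the web server inside the session.
gcloud compute ssh my-lab-vm --zone us-central1-c
sudo apt-get update && sudo apt-get install -y nginx
```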


  • Create a virtual machine with the Google Cloud Platform Console
  • Create a virtual machine with gcloud command line
  • Deploy a web server and connect it to a virtual machine
LEARNING LINUX COMMAND LINE

Working with a text-based command-line environment, without the graphical user interface (the windows and buttons we're all familiar with), can be intimidating at first. But once you start to understand how the command-line environment works, you'll see how powerful and efficient it can be. I'm Scott Simpson, and in this course I'll introduce you to the basics of working with a Linux command line using the very common shell called Bash. I'll explain what the command line is and how its major parts work. We'll take a look at working with files and folders, and I'll explain how Linux protects files from unauthorized access with permissions. Then I'll show you some common commands you should be familiar with, and we'll see how to connect commands together with pipes. After that, I'll show you some of the more complex tasks you'll need to be familiar with in the command-line environment. This course will give you a foundation of knowledge working with the widely used Bash shell, in case you choose to extend your learning into user management, network configuration, programming and development, or system administration.
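Two ideas from that course intro, pipes and permissions, can be sketched in a few lines (the file is a throwaway temp file):

```shell
# Pipes: stdout of one command becomes stdin of the next.
printf 'cherry\napple\nbanana\n' | sort | head -n 2   # prints "apple" then "banana"

# Permissions: chmod 600 leaves a file readable and writable by its owner only.
f=$(mktemp)
chmod 600 "$f"
ls -l "$f"   # mode column reads -rw-------
rm -f "$f"
```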

Day 31 : 6th September 2018 Friday 10:00 AM

Today I attended the Google Cloud Platform Hamina region opening ceremony. The event was very crowded and few companies were


Day 32 : 7th September 2018 Friday 10:00 AM


Day 32 : 10th September 2018 Monday 10:00 AM


Day 33 : 11th September 2018 Tuesday 10:00 AM



An Associate Cloud Engineer deploys applications, monitors operations, and manages enterprise solutions. This individual is able to use Google Cloud Console and the command-line interface to perform common platform-based tasks to maintain one or more deployed solutions that leverage Google-managed or self-managed services on Google Cloud.

The Associate Cloud Engineer exam assesses your ability to:

  • Set up a cloud solution environment
  • Plan and configure a cloud solution
  • Deploy and implement a cloud solution
  • Ensure successful operation of a cloud solution
  • Configure access and security




  • CREATING PROJECTS: Creating a project is the most basic cloud activity you need to do to even get started. Creating a project with an associated organization name will allow you to start building up cloud infrastructure like VMs, databases, big data services, etc.

Google Cloud Platform projects form the basis for creating, enabling, and using all GCP services, including managing APIs, enabling billing, adding and removing collaborators, and managing permissions for GCP resources. This page explains how to create and manage GCP projects using the Resource Manager API and the Google Cloud Platform Console.

CREATING A PROJECT

To create a project, you must have the resourcemanager.projects.create permission. When an organization is created, the entire domain has the Project Creator role, which includes that permission.

You can customize a few options before creating the project, such as the organization name and ID. The owner of the organization cannot be changed later. After the project is created, you can add various resources to it and make it functional.

MANAGING PROJECT QUOTAS

The number of projects left in your quota can be viewed when creating a new project, as in the Creating a Project steps above. When creating the project, a notification will display your number of remaining projects. To request additional capacity for projects in your quota, see the Increase support page. More information about quotas and why they are used can be found at the Free Trial Project Quota Requests support page.

IDENTIFYING PROJECTS

To interact with GCP resources, you must provide the identifying project information for every request. A project can be identified in the following ways:

Project ID: the customized name you chose when you created the project, or when you activated an API that required you to create a project ID. Note that you can’t reuse the project ID of a deleted project.

Project number: a number that’s automatically generated by the server and assigned to your project.

To get the project ID and the project number, go to the Google Cloud Platform Console and select your project. Both the project ID and project number are displayed on the project Dashboard’s Project info card.

A project ID is different from a project name. The project name is a human-readable way to identify your projects, but it isn’t used by any Google APIs. In the above example, the project name is My Sample Project and the project ID is my-sample-project-191923. The project ID is generated from the project name you enter when you create the project in the Google Cloud Platform Console. Certain words are restricted from use in project IDs. If you use restricted words in the project name, such as google or ssl, the generated project ID will not include these words. This will not affect the project name.

The project number and project ID are unique across Google Cloud Platform. If another user owns a project ID for their project, you won’t be able to use the same project ID. When you choose your project ID (or any resource names), don’t include any sensitive information in your names.

GETTING AN EXISTING PROJECT

You can get an existing project using the GCP Console or the projects.get() method. To view a project via the GCP Console, go to the Google Cloud Platform Console, click the projects drop-down on the top bar (the drop-down label will be the name of the project you’re currently viewing), and select the project you wish to view.

LISTING PROJECTS

You can list all projects you own using the GCP Console or the projects.list() method. To list projects using the GCP Console:
    • Go to the Google Cloud Platform Console.
    • All your projects are listed in the projects drop-down on the top bar. Use the Search projects and folders textbox to filter projects.
    • To list all your projects, click Manage Resources. Use the Filter by name, ID, or label textbox to filter your projects.
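The same listing can also be done from Cloud Shell. A sketch, assuming an authenticated gcloud session (the filter value is just an example):

```shell
# List all projects the current account can see
gcloud projects list

# Filter the list; --filter accepts simple key:value expressions
gcloud projects list --filter="name:webproject"
```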
    UPDATING PROJECTS

    You can update projects using the GCP Console or the projects.update() method. Currently, the only fields that can be updated are the project name and labels. You cannot change the project ID value that you use with the gcloud command-line tool or API requests. For more information about updating projects, see the project API reference page. To update a project’s field using the GCP Console:
    • Open the Settings page in the Google Cloud Platform Console.
    • Click Select a project.
    • To change the project name, edit Project name, then click Save.
    • To change labels, click Labels on the left nav. Learn more about Using Labels.
    SHUTTING DOWN (DELETING) PROJECTS

    You can shut down projects using the GCP Console or the projects.delete() method.

    Shutting down a project does not delete it immediately; it only requests deletion of the project. The project is marked for deletion (“soft deleted”), and you will lose access to it immediately, but the project can be recovered for a 30-day period. Until actual deletion, the project still counts towards your quota usage.

    The project owner will receive an email notification that the project has been marked for deletion. During the 30-day period, the owner can recover the project by following the steps to restore a project. After the 30-day period, the project and all the resources under it are deleted and cannot be recovered. While most resources can be recovered during the 30-day period, Cloud Storage begins deleting resources before the 30-day recovery period ends, and these resources may not be fully recoverable.

    If billing is set up for a project, it might not be completely deleted until the current billing cycle ends and your account is successfully charged. The number and types of services in use can also affect when the system permanently deletes a project.

    To shut down a project, the following must be true:
    • The project must not have a billing account associated with it.
    • The project must have a lifecycle state of ACTIVE.
    To shut down a project using the GCP Console:
    • Open the Settings page (found under IAM & admin) in the Google Cloud Platform Console.
    • Click Select a project.
    • Select a project you wish to delete, and click Open.
    • Click Shut down.
    • Enter the Project ID, then click Shut down.
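The same shutdown can be requested from the command line. A sketch, assuming an authenticated gcloud session (the project ID is the example one used later in this post); the command asks for confirmation before marking the project for deletion:

```shell
# Request deletion of the project; it is soft-deleted and remains
# recoverable for 30 days, as described above
gcloud projects delete testdemotrial123
```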
    RESTORING A PROJECT

    Project owners can restore a deleted project within the 30-day recovery period that starts when the project is shut down. Restoring a project returns it to the state it was in before it was shut down. Cloud Storage resources are deleted before the 30-day period ends, and may not be fully recoverable. Some services might need to be restarted manually. For more information, see Restarting Google Cloud Platform Services. To restore a project:
    1. Go to the Manage Resources page in the Google Cloud Platform Console.
    2. In the Organization drop-down in the upper left, select your organization.
    3. Below the list of projects, click Resources pending deletion.
    4. Check the box for the project you want to restore, then click Restore. In the dialog that appears, confirm that you want to restore the project.
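Restoring can also be done from the CLI during the 30-day window. A sketch, assuming an authenticated gcloud session and the example project ID from this post:

```shell
# Undo a pending deletion while the project is still recoverable
gcloud projects undelete testdemotrial123
```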




A Cloud Guru doesn’t have any other courses or learning paths suitable for GCP, so CLOUD GURU LEARNING FINISHED!


From today, I will start doing one web project and one GCP project, which could be stand-alone or incremental.

I plan to do it at the end of the day, maybe at 4 PM, but since today is the first day, let’s do it first.


Create a web server called “mywebserver” with HTTP and HTTPS access, 1 CPU, and CentOS.

The first thing any new DevOps engineer would like to do in Google Cloud Platform is to spin up a Linux server and access it using a remote desktop connection.

The primary difficulty is that a Linux server started in GCP runs in CLI mode and thus has no GUI of its own. So, we need to install a GUI first using the command

#yum -y groupinstall "GNOME Desktop" && systemctl set-default graphical.target && shutdown -r now

This will install the GUI, make the graphical target the default startup option, and then restart the machine.

$ sudo yum install xrdp tigervnc-server

This will install the TigerVNC server on CentOS. Tigervnc-server is a program that executes an Xvnc server and starts parallel sessions of GNOME or another desktop environment on the VNC desktop.

After installation, log in using the account you want to grant xrdp access to connect remotely. For instance, if you want the root user to have access via remote desktop, set a password for that user, as it is blank by default.

# sudo su
# passwd
# systemctl start xrdp

xrdp should be listening on port 3389. You can confirm this by using the command:

# netstat -antup | grep xrdp
tcp        0      0    *           LISTEN      10202/xrdp
tcp        0      0 *           LISTEN      10201/xrdp-sesman

xrdp doesn’t start automatically at boot, so enable it with the following command:

# systemctl enable xrdp
Created symlink from /etc/systemd/system/ to /usr/lib/systemd/system/xrdp.service.

Now, we can check the connection from any Windows machine by starting remote desktop and putting the IP address of the Linux server.

After typing the IP and pressing Connect, you will be prompted for another login, as follows

After successful login, you will have access to GUI based Linux running on GCP instance!


Let’s start with one online course today !


As explained in my previous articles we need to first create an instance in the compute engine, particularly a VM with LAMP stack this…



In this example, I am going to show how to build a simple GCP based application that will take input from the user and convert that into a…


I wrote these articles on Medium.


Day 39: 10th October 2018 Wednesday 9:00 AM

Due to work and an uneven study schedule, I couldn’t update the post regularly. In the meantime, two of my friends got jobs after graduation, and I had to work mornings and evenings. In the daytime I am free, but study is not going as I anticipated. I have now dropped my evening duty to focus on my study. Today I planned to study 12 hours. Let’s see how it goes.


Even though my primary focus is on Cloud, some of the other courses interest me, and when I start one it takes a long time to finish. Maybe that is the reason I am lagging behind.


Now I am learning from a projector.



Without proper practice of the items in the GCP platform, it is nearly impossible to know how to use them. Today we are going to…



Note: This process is taken from various sources found via Google search. I don’t claim ownership of the procedure, nor do I guarantee that it…



GCP Learning Series: connecting MySQL Client from Compute Engine

Often you need to host a database in the cloud for different purposes. It is the most basic thing that needs to be done for any online…




In this series, I will try to go through the sections in the GCP certification and document the steps to carry out the tasks mentioned under each subheading.

Google describes an Associate Cloud Engineer as someone who can perform the following tasks:


An Associate Cloud Engineer deploys applications, monitors operations, and manages enterprise solutions. This individual is able to use the Google Cloud Console and the command-line interface to perform common platform-based tasks to maintain one or more deployed solutions that leverage Google-managed or self-managed services on Google Cloud.

The Associate Cloud Engineer exam assesses your ability to:

  • Set up a cloud solution environment
  • Plan and configure a cloud solution
  • Deploy and implement a cloud solution
  • Ensure the successful operation of a cloud solution
  • Configure access and security

The whole certification is divided into five sections, each with subsections. My plan is to go through each subsection so that learning will be easy, piece by piece.

So, today we will go through subsection 1.1 of section 1.

Before going directly into section 1.1, we should have some information on the GCP resource hierarchy. Its purpose is to bind resources to their owner, maintain inheritance of ownership, and provide access control and policy for the resources.

Generally, we can compare the resource hierarchy with the file system in a traditional OS. This hierarchical organization of resources enables you to set access control policies and configuration settings on a parent resource, and the policies and IAM settings are inherited by the child resources.

Google says :

At the lowest level, resources are the fundamental components that make up all GCP services. Examples of resources include Compute Engine Virtual Machines (VMs), Cloud Pub/Sub topics, Cloud Storage buckets, App Engine instances. All these lower level resources can only be parented by projects, which represent the first grouping mechanism of the GCP resource hierarchy.

G Suite and Cloud Identity customers have access to additional features of the GCP resource hierarchy that provide benefits such as centralized visibility and control, and further grouping mechanisms, such as folders. We have launched the Cloud Identity management tool. For details on how to use Cloud Identity, see Migrating to Cloud Identity.

GCP resources are organized hierarchically. Starting from the bottom of the hierarchy, projects are the first level, and they contain other resources. All resources must belong to exactly one project.

The Organization resource is the root node of the GCP resource hierarchy and all resources that belong to an organization are grouped under the organization node. This provides central visibility and control over every resource that belongs to an organization.

Folders are an additional grouping mechanism on top of projects. You are required to have an Organization resource as a prerequisite to using folders. Folders and projects are all mapped under the Organization resource.

The GCP resource hierarchy, especially in its most complete form which includes an Organization resource and folders, allows companies to map their organization onto GCP and provides logical attach points for access management policies (Cloud Identity and Access Management) and Organization policies. Both Cloud IAM and Organization policies are inherited through the hierarchy, and the effective policy at each node of the hierarchy is the result of policies directly applied at the node and policies inherited from its ancestors.

The diagram below represents an example GCP resource hierarchy in complete form:

Section 1: Setting up a cloud solution environment

1.1 Setting up cloud projects and accounts. Activities include:

  • Creating projects.
  • Assigning users to pre-defined IAM roles within a project.
  • Linking users to G Suite identities.
  • Enabling APIs within projects.
  • Provisioning one or more Stackdriver accounts.


The most basic thing to do in GCP is creating a project. Creating the project in itself doesn’t accomplish anything, but it starts the process so that you can add entities to the project and build your own network, create a database, write code, build servers, etc.

The project number and project ID are unique across the Google Cloud Platform. If another user owns a project ID for their project, you won’t be able to use the same project ID. Also, restricted words such as google or ssl will not be included in the generated project ID.

All instances are attached to a project. So, you get the idea: everything lives inside a project.

To create a project, first, you need to have access to the cloud console.

create project

If you are running a free account, you should select “Individual” while creating the account itself. Free users get a limited quota, so if the number of projects that can still be created is less than 30, you will get the message shown above.

The project name is a human-readable name for simplicity, while all processing is done using the project ID, shown just below the project name.

You may be working in the console with more than one project, so it is always a good idea to check the project name while performing tasks.

To create a new project, use the gcloud projects create command:

gcloud projects create PROJECT_ID


prashantagcppaudel@cloudshell:~ (webproject-217416)$ gcloud projects create testdemotrial123
Create in progress for [].
Waiting for [operations/cp.6134727994789518289] to finish...done.

where PROJECT_ID is the ID for the project you want to create. A project ID must start with a lowercase letter, can contain only lowercase ASCII letters, digits, and hyphens, and must be between 6 and 30 characters.
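These ID rules can be checked locally before calling gcloud. The helper below is hypothetical (it is not part of gcloud); it simply mirrors the documented constraints with a regular expression:

```shell
# Hypothetical helper, not part of gcloud: returns success (0) only if the
# candidate matches the documented project ID rules -- 6 to 30 characters,
# starting with a lowercase letter, containing only lowercase letters,
# digits, and hyphens.
valid_project_id() {
  printf '%s' "$1" | grep -Eq '^[a-z][a-z0-9-]{5,29}$'
}

valid_project_id "testdemotrial123" && echo "valid"    # prints "valid"
valid_project_id "Invalid_ID" || echo "invalid"        # prints "invalid"
```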

To create a project with an organization (not available for a free account) or a folder as a parent, use the --organization or --folder flag. As a resource can only have one parent, only one of these flags can be used:

gcloud projects create PROJECT_ID --organization=ORGANIZATION_ID

gcloud projects create PROJECT_ID --folder=FOLDER_ID

To check the metadata of the project, see the project’s Dashboard or use

# gcloud projects describe PROJECT_ID
root@cs-6000-devshell-vm-4663cdf8-e36b-46af-a81b-16d2c81e115c:/home/prashantagcppaudel# gcloud projects describe testdemotrial123
createTime: '2018-10-12T07:50:46.560Z'
lifecycleState: ACTIVE
name: testdemotrial123
projectId: testdemotrial123
projectNumber: '376521124726'


When a new project is created, the account used in GCP automatically gets owner access. There are a few standard user access types, or roles:

  1. Browser
  2. Editor
  3. Owner
  4. Viewer

After the project has been created, click on the project and then IAM & admin to get to the Identity and Access Management page.

When you click ADD, you will get the option to add users to the project. Here you will see various options to grant access for projects and billing. Select the project and then the access type.
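The console steps above have a CLI equivalent. A sketch, assuming an authenticated gcloud session; the project ID and email address are placeholders:

```shell
# Grant the Viewer primitive role on the project to a user
gcloud projects add-iam-policy-binding testdemotrial123 \
  --member="user:someone@example.com" \
  --role="roles/viewer"
```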

Google says:

With Cloud IAM, every Google Cloud Platform method requires that the account making the API request has appropriate permissions to access the resource. Permissions allow users to perform specific actions on Cloud resources. For example, the resourcemanager.projects.list permission allows a user to list the projects they own, while resourcemanager.projects.delete allows a user to delete a project.

The following table lists the permissions that the caller must have to call a projects API:

You don’t directly give users permissions; instead, you grant them roles, which have one or more permissions bundled within them.

You can grant one or more roles on the same project. When using the resourcemanager.projects.getIamPolicy() method to view permissions, only the permissions assigned to the project itself will appear, not any inherited permissions.


The following table lists the roles that you can grant to access a project, the description of what the role does, and the permissions bundled within that role.


To protect against owners/administrators accidentally deleting a project, GCP has a feature called liens. You can place a lien on a project to block project deletion until the lien is revoked. The easiest approach to using liens is the gcloud shell.

Putting liens

To place a lien on a project, a user must have the resourcemanager.projects.updateLiens permission, which is granted by the roles/owner and roles/resourcemanager.lienModifier roles.

gcloud alpha resource-manager liens create \
  --restrictions=resourcemanager.projects.delete \
  --reason="Super important production system"

The available parameters to liens create are:

  • --project – The project the lien applies to.
  • --restrictions – A comma-separated list of IAM permissions to block.
  • --reason – A human-readable description of why this lien exists.
  • --origin – A short string denoting the user/system which originated the lien. Required, but the gcloud tool will automatically populate it with the user’s email address if left out.

At present, the only valid restriction for a project is resourcemanager.projects.delete.


To list liens applied to a project, a user must have the resourcemanager.projects.get permission. Use the liens list gcloud command.

gcloud alpha resource-manager liens list

Here is some example output for this command:

gcloud alpha resource-manager liens list
NAME                                                  ORIGIN            REASON
p1061081023732-l3d8032b3-ea2c-4683-ad48-5ca23ddd00e7  testing

If you try to delete the project then you will be presented with an error message


To remove a lien from a project, a user must have the resourcemanager.projects.updateLiens permission, which is granted by roles/owner and roles/resourcemanager.lienModifier.

gcloud alpha resource-manager liens delete [LIEN_NAME]
gcloud alpha resource-manager liens delete p925736366706-lb2d80913-a41b-47a4-b4af-799489f09f96
Deleted [liens/p925736366706-lb2d80913-a41b-47a4-b4af-799489f09f96].


  • [LIEN_NAME] is the name of the lien to be deleted.


Google Cloud Platform offers Cloud Identity and Access Management (IAM), which lets you assign granular access to specific Google Cloud Platform resources and prevents unwanted access to other resources. IAM lets you control who (users) has what access (roles) to which resources by setting IAM policies on the resources.

You can set an IAM policy at the organization level, the folder level, the project level, or (in some cases) the resource level. Resources inherit the policies of the parent node. If you set a policy at the Organization level, it is inherited by all its child folders and projects, and if you set a policy at the project level, it is inherited by all its child resources.

The effective policy for a resource is the union of the policy set on the resource and the policy inherited from its ancestors. This inheritance is transitive. In other words, resources inherit policies from the project, which inherit policies from the organization. Therefore, the organization-level policies also apply at the resource level.

For example, in the resource hierarchy diagram above, if you set a policy on folder “Dept Y” that grants the Project Editor role to a user, that user will have the editor role on projects “Dev GCP,” “Test GCP,” and “Production.” Conversely, if you assign another user the Instance Admin role only on project “Test GCP”, that user will only be able to manage Compute Engine instances in that project.

The IAM policy hierarchy follows the same path as the GCP resource hierarchy. If you change the resource hierarchy, the policy hierarchy changes as well. For example, moving a project into an organization will update the project’s IAM policy to inherit from the organization’s IAM policy. Similarly, moving a project from one folder to another will change the inherited permissions. Permissions that were inherited by the project from the original parent will be lost when the project is moved to a new folder. Permissions set at the destination folder will be inherited by the project as it is moved.
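As a toy illustration of this inheritance (plain shell, nothing GCP-specific): the effective policy at a project is the de-duplicated union of the roles granted at the project and at each of its ancestors. The role grants below are invented for the example:

```shell
# Toy model of policy inheritance: roles granted at the organization,
# folder, and project levels are merged; the effective policy on the
# project is the union of all three, with duplicates removed.
org_roles="roles/viewer"
folder_roles="roles/editor roles/viewer"   # e.g. granted on folder "Dept Y"
project_roles=""                           # nothing granted directly

effective=$(printf '%s\n' $org_roles $folder_roles $project_roles | sort -u)
echo "$effective"
# prints:
# roles/editor
# roles/viewer
```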


If a new user wants access to the resources, they should already possess a G Suite or Cloud Identity account. G Suite generally represents a company and is a prerequisite for accessing organization resources.

Google says:

The G Suite or Cloud Identity account represents a company and is a prerequisite to have access to the Organization resource. In the GCP context, it provides identity management, recovery mechanism, ownership and lifecycle management. The picture below shows the link between the G Suite account, Cloud Identity, and the GCP resource hierarchy.

The G Suite super admin is the individual responsible for domain ownership verification and the contact in cases of recovery. For this reason, the G Suite super admin is granted the ability to assign Cloud IAM roles by default. The G Suite super admin’s main duty with respect to GCP is to assign the Organization Administrator IAM role to appropriate users in their domain. This will create the separation between G Suite and GCP administration responsibilities that users typically seek.

GCP users are not required to have an Organization resource. A user acquires an Organization resource only if they are also G Suite or Cloud Identity customers. The Organization resource is closely associated with a G Suite or Cloud Identity account. Each G Suite or Cloud Identity account may have exactly one Organization provisioned with it. Once an Organization resource is created for a domain, all GCP projects created by members of the account domain will by default belong to the Organization resource.


This user has not been associated with any project or G Suite.

Now I am going to give another user access to the project, as shown below. This project has one Compute Engine instance and one Cloud Storage bucket.

Add user as shown previously

Here I have given access to the project as a whole so it should be accessible from the first user.

The invited user will get an email asking them to accept the invite, as shown below

When the link is clicked, you will be redirected to the console and the project will be displayed as shown below.

Also, notice that the instances under the project are also listed in the resources available to the user.

The second user can create, delete and modify instances if he has proper access.

If you accidentally remove yourself from the project and there are no other users in the project, then you are locked out of the project.


All the APIs are accessible from the console page > API & Services. After clicking API & Services you will be presented with the API Dashboard.

To enable a new API, click the Enable button; APIs that are already enabled show a Disable button at the rightmost of each row.

You can also select from the library of APIs listed under the Library section.

Another important part is credentials: to use any API you need valid credentials. To find the keys for authenticating uses of an API, you can add and remove credentials from the list.

Google says:


Project owners can restore a deleted project within the 30-day recovery period that starts when the project is shut down. Restoring a project returns it to the state it was in before it was shut down. Cloud Storage resources are deleted before the 30-day period ends, and may not be fully recoverable.

Some services might need to be restarted manually. For more information, see Restarting Google Cloud Platform Services.

To restore a project:

  1. Go to the Manage Resources page in the Google Cloud Platform Console.
  2. In the Organization drop-down in the upper left, select your organization.
  3. Below the list of projects, click Resources pending deletion.
  4. Check the box for the project you want to restore, then click Restore. In the dialog that appears, confirm that you want to restore the project.


First, let’s check what Stackdriver is.


Get the most out of your free Workspace by installing the Stackdriver Monitoring and Logging agents on each of your VM instances. Agents collect more information from your VM instances, including metrics and logs from third-party applications:

  1. Switch to the terminal connected to your VM instance, or create a new one.
  2. Install the Stackdriver agents by running the following commands on your instance:
# To install the Stackdriver monitoring agent:
$ curl -sSO
$ sudo bash
# To install the Stackdriver logging agent:
$ curl -sSO
$ sudo bash
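The curl commands above were truncated in my notes. For reference, at the time of writing the full commands looked roughly like the following; the dl.google.com URLs and script names are from memory and may have changed, so verify them against the Stackdriver documentation before use:

```shell
# Stackdriver monitoring agent (URL is an assumption -- verify before use)
curl -sSO https://dl.google.com/cloudagents/install-monitoring-agent.sh
sudo bash install-monitoring-agent.sh

# Stackdriver logging agent (URL is an assumption -- verify before use)
curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
sudo bash install-logging-agent.sh
```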

I installed one VM and installed the Stackdriver monitoring and logging agents on it, as shown in the commands above.

Now go to the GCP console and then to Stackdriver > Monitoring. You will be presented with a welcome page.

Now we need to add users for monitoring. Adding users with various roles is pretty straightforward for Stackdriver. We don’t have to validate different users separately; we add them in IAM & admin just as when adding a new user to the project, but select Stackdriver and the Stackdriver roles and save the configuration.

After adding the user, the granted access will allow the user to view/edit/delete/monitor new services/apps.

So, we finished section 1.1 of the GCP certification.




Prashanta Paudel · Oct 14

Google Cloud Platform can be used in various ways: through the API, the cloud shell, and the cloud console. The most common way of getting your hands dirty while learning the Google Cloud Platform is the Cloud Console, while someone with scripting or programming knowledge may prefer the shell.


Cloud Console is the visual way of working with the cloud. It has almost all the features that the Cloud SDK on a virtual machine gives to the admin. The Console is the gateway to all the features of Google Cloud and is very handy in the way it is organized visually.

GCP Console

The top-left portion of the console has a menu icon which, when clicked, slides a list of menus in from the left. Even though the interface of GCP changes frequently, the places for the menu icon, personal profile, notifications, and cloud shell have more or less remained the same.

When you click on the menu icon (three small lines), you will get a list of sub-menus. By default, you will have a Home button and products. The Home button shows the dashboard of the GCP project selected in the top menu.

project selector

The sub-menu contains all the features in GCP. This list is the most frequently changing list on the whole console page. Some features are in the beta phase, some get merged into other menus, while some appear in the main list.

In the main list, there is an option to pin a menu item below Home so that you don’t need to scroll down all the time.

The search option in the middle of the top menu bar gives users easy access to the search function. You can search for any project, deployment, marketplace item, etc. from here. If you are a fast typist, this can be much easier than going through the main menu.


The top-right items are profile, settings, notifications, help, feedback and cloud shell.

Top-right menu

All the other buttons are standard except notifications and cloud shell.

Notifications act like a log of the activities you perform in GCP; the icon changes its appearance when GCP is doing something.

creating a project notification


It is another way of using GCP. The important thing to note while using the shell is that you should have prior experience working with a shell. Although the shell has help and manuals for understanding its commands, it is time-consuming to go through all the options and construct a properly structured command if the same thing can be done easily from the GUI.

You can access cloud shell from the top menu bar by clicking the command-prompt-like icon.

After clicking the icon, a new virtual machine with the Cloud SDK will be initialized and loaded into the platform.

Cloud shell provides a command-line environment to access cloud resources directly from the browser. It is equivalent to downloading, installing, and connecting to the Google Cloud SDK on your computer.

Cloud shell also has options at the top menu bar.

The first option lets you install a Chrome extension for sending key combinations.

First option

The second icon gives options for changing the appearance of the shell as well as copy and keyboard settings.

The third option lets you add another session to the shell.

The last icon with three dots, when pressed, has several functions like uploading files, restarting the shell, checking statistics, etc.

last option

If you want to use the interactive shell for your project, you can start it with the command

#gcloud beta interactive


Cloud Shell gives 5GB of persistent disk storage in your home directory.

You must load the project in the cloud shell to start using the resources in it. We do this using the command:

$ gcloud config set project Project_ID

If we apply commands to a different project, things may really go wrong.

If you use root privileges, the project selection should be confirmed again.

Always check which project you are working on by looking at the project ID in username@cloudshell:~ (PROJECT_ID)$.
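You can also ask gcloud for the active project explicitly. A sketch, assuming the standard gcloud config commands in an authenticated session:

```shell
# Show only the currently configured project ID
gcloud config get-value project

# Or show the whole active configuration (account, project, etc.)
gcloud config list
```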




Prashanta Paudel · Oct 15

Managing and configuring billing accounts, and associating projects with a billing account, is another important thing you should be careful about while using GCP.

Most importantly, you cannot even start to use Google Cloud Platform, even with a free account or trial, without giving billing information.

However, you can change the billing method in “How you pay”. The available methods of payment depend on Google’s presence in the country. For some countries, like the USA, there are several methods including bank transfer and invoice, whereas for others only card payments are possible.

Google says: Available payment methods are determined by country and the type of payment selected in “How you pay”

Here it is relevant to mention the free account features. The free account requires a Gmail address, and the user must have credit card info with them before setting up the free account. The free account has the following features.

For the GCP certification at the Associate level, you must know the following things:

  • Creating one or more billing accounts.
  • Linking projects to a billing account.
  • Establishing billing budgets and alerts.
  • Setting up billing exports to estimate daily/monthly charges.

So, we check each point one by one.


By default, when you start using GCP you will have one billing account, since without one you cannot use GCP. You may also add another billing account if you want to manage things so that a certain instance is billed separately.

For example, if your company has three projects running simultaneously, then having three billing accounts will give flexibility for identifying costs and managing accounts and expenses.

To create a billing account you must be a billing administrator. By default, the owner of the project has billing administrator rights.

To create another billing account

  1. Open the console, click on the main menu, and select Billing from the sliding menu.
select billing

You will be presented with a dashboard containing billing information.


If you are in the trial phase, you will see the amount and days remaining in your trial period, as shown in 1.

Always check which project the information is shown for, as in 2.

You can easily modify and delete billing accounts from the options in 3.

Now, to add a new billing account, open the "My billing account" drop-down menu and click on "Manage billing accounts".

manage billing accounts

You will see the list of billing accounts on the page shown.

Click on the "Create account" button to start creating a new billing account. The first step is giving the billing account a name.

Then you will be asked to state your country; the currency will be selected automatically.

Now you need to set up a billing profile with all the information, such as account type, tax information, name and address, primary contact, how you pay, and payment method.

After filling in all the information, your billing account is ready to be associated with projects or instances.

Remember, creating a new billing account alone doesn't finish the setup; you still need to associate a project with it.

You also need to verify your email address in order to receive billing-related emails.

You can modify the billing address by going to the payment settings of the billing account.

You can see the projects under the billing account by going to "My projects" in the billing dashboard.

Projects under billing

You cannot manage all aspects of billing unless you have billing administrator rights.

As a user, you can associate projects with the billing account. To add a user as a billing administrator, go to IAM & Admin, select the account you want to authorize, and assign the Project Billing Administrator role.


First go to the Billing dashboard, then to My projects.

Here you will see the list of projects in your account. Click the three dots next to the appropriate project and choose "Change billing account". You can now change the billing account, provided you have multiple billing accounts in your GCP.

change billing account


To set up billing budgets and alerts, go to the main menu, then to the Billing dashboard.

Click on "Budgets & alerts" on the left side of the screen.

You will see a screen like the one below.

budgets and alerts

Click on “Create budget”

Enter a name for the budget; it can be anything you can easily remember and relate to.

Now select what the budget applies to, then the budget amount. Remember that you can scope the budget to either a billing account or a project; since billing accounts aggregate projects, scoping to the billing account is usually easier to manage.

Projects -> Billing account -> Budget -> Alerts

Checking "Cost after credit" disables alerts until your credit, i.e. the free trial credit, runs out.

Now you can select at what percentage of budget spend you want a notification to appear.

You can also use Pub/Sub for alerts based on other preferences. Select it if you are familiar with Pub/Sub; otherwise, leave it as it is.

After clicking Save, you will see the budgets and alerts currently active in GCP.

Here I have set a budget of 5 euros, of which about 50% has been spent.
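The alert thresholds are just percentages of the budget. For the 5-euro budget above, a quick shell calculation (thresholds assumed at 50%, 90%, and 100%) shows the spend at which each notification would fire:

```shell
# Budget of 5 euros; assumed alert thresholds of 50%, 90% and 100%
BUDGET_EUR=5
for pct in 50 90 100; do
  # spend (in euro cents) at which this alert fires
  cents=$(( BUDGET_EUR * 100 * pct / 100 ))
  echo "Alert at ${pct}%: ${cents} cents"
done
```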


Billing data can be very important for project management, and for finance officers determining the costs of various instances.

Billing can be exported from GCP in two ways.

  1. File export
  2. BigQuery export

File export stores the automatically generated billing data in a bucket, in CSV or JSON format. It uses a service account to authenticate read/write access.

file export

For this to work, you must have created the bucket beforehand.

Once you click Save, the billing data will automatically be saved in the bucket.

Another option is BigQuery export, which sends the billing data directly to a BigQuery dataset.

First, select the project in which the billing account and BigQuery are enabled. Then go to BigQuery, create an empty dataset, and select that same dataset as the billing export dataset. Finally, click Save.

After you finish configuring, you will see an Enabled icon under Billing export.

Billing export
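Once the export is enabled, the data lands in a table named after the billing account inside the dataset you chose. A hedged example of querying it with bq; the dataset name and billing-account ID here are placeholders:

```shell
# Placeholder dataset and billing-account ID; the export table is named
# gcp_billing_export_v1_<BILLING_ACCOUNT_ID> inside the chosen dataset
bq query --use_legacy_sql=false '
SELECT service.description AS service, ROUND(SUM(cost), 2) AS total_cost
FROM `my_billing_dataset.gcp_billing_export_v1_0X0X0X_0X0X0X_0X0X0X`
GROUP BY service
ORDER BY total_cost DESC'
```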

With this, we have finished section 1.2 of the course, on billing activities.



Prashanta Paudel · Oct 15

We had a brief discussion about the GCP command-line interface in this blog some time ago. Please go through "GCP Learning Series: Cloud Console and Cloud Shell" before jumping directly into this topic.

In this blog, we will go through the installation of the Cloud SDK on a local computer or laptop. We will also see how you can install the Cloud SDK on a Compute Engine instance inside Google Cloud Platform.

Cloud SDK is a set of tools for managing GCP. It basically contains the gcloud, gsutil, and bq commands, which can be used to access and manage Compute Engine, Cloud Storage, BigQuery, and other products and services from a command-line environment. You can also use automated scripts to manage these instances.

Managing virtual machines from gcloud is the easiest way to perform various tasks on a VM.

gcloud can also be used to manage networks, firewalls, storage, and more without having to use the console. With gcloud, managing configurations for your Compute Engine environment is just a few keystrokes away.

Cloud SDK covers a wide range of services in GCP.


You can use various commands to manage all the products in GCP.

gcloud Tool

gcloud manages authentication, local configuration, developer workflow, and interactions with the Cloud Platform APIs.

gsutil Tool

gsutil provides command line access to manage Cloud Storage buckets and objects.

Powershell cmdlets (Windows)

Google Cloud Tools for PowerShell is a collection of Windows PowerShell cmdlets for managing Google Cloud Platform resources within the Windows PowerShell environment.

bq Tool

bq allows you to run queries, manipulate datasets, tables, and entities in BigQuery through the command line.

kubectl Tool

kubectl orchestrates the deployment and management of Kubernetes container clusters on Google Cloud.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — —

You can install cloud SDK in the following systems

a. Linux

b. Mac OS

c. Windows

Along with this, you may also use browser-based SDK which gives you 5 GB of persistent disk.

In all cases, the main motive is to use commands available in SDK from different platforms.

Let us install Cloud SDK in Linux.


You don't need to manually download and install the Cloud SDK on Linux.

You should add the Cloud SDK repository before installing the SDK.

The command for adding the repo (on Red Hat/CentOS) is:

sudo tee -a /etc/yum.repos.d/google-cloud-sdk.repo << EOM
[google-cloud-sdk]
name=Google Cloud SDK
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOM

After that, the command for installing the Cloud SDK is:

sudo yum install google-cloud-sdk

Additional components that contain dependencies for various commands are also available in the SDK, but not installed by default.

additional component

With other package managers, the Cloud SDK can be packaged with additional components.

other components

For example, the google-cloud-sdk-app-engine-java component can be installed as follows:

sudo yum install google-cloud-sdk-app-engine-java


The process to install the Cloud SDK on Debian-based Linux is similar to Red Hat. This package contains the gcloud, gcloud alpha, gcloud beta, gsutil, and bq commands only.

It does not include kubectl or the App Engine extensions required to deploy an application using gcloud commands.

  1. Create an environment variable for the correct distribution:
export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)"

2. Add the Cloud SDK distribution URI as a package source:

echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list

Note: If you have apt-transport-https installed, you can use "https" instead of "http" in this step.

3. Import the Google Cloud public key:

curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Troubleshooting tip: If you are unable to get the latest updates due to an expired key, obtain the latest apt-get.gpg key file.

4. Update and install the Cloud SDK:

  • sudo apt-get update && sudo apt-get install google-cloud-sdk

Note: For additional options, such as disabling prompts or dry runs, refer to the apt-get man pages.

5. Optionally, install any of these additional components:

  • google-cloud-sdk-app-engine-python
  • google-cloud-sdk-app-engine-python-extras
  • google-cloud-sdk-app-engine-java
  • google-cloud-sdk-app-engine-go
  • google-cloud-sdk-datalab
  • google-cloud-sdk-datastore-emulator
  • google-cloud-sdk-pubsub-emulator
  • google-cloud-sdk-cbt
  • google-cloud-sdk-cloud-build-local
  • google-cloud-sdk-bigtable-emulator
  • kubectl

6. For example, the google-cloud-sdk-app-engine-java component can be installed as follows:

  • sudo apt-get install google-cloud-sdk-app-engine-java

7. Run gcloud init to get started:

  • gcloud init


There are two approaches to installing the Cloud SDK on Windows. The first is the installer:

  1. Download the Cloud SDK installer. The installer is signed by Google Inc.
  2. Launch the installer and follow the prompts.

Cloud SDK requires Python 2 with a release version of Python 2.7.9 or later. The installer will install all necessary dependencies, including the needed Python version, by default. If you already have Python 2.x.y installed and want to use the existing installation, you can uncheck the option to install Bundled Python.

  1. After installation has completed, accept the following options:
  • Start Cloud SDK Shell
  • Run gcloud init

2. The default installation does not include the App Engine extensions required to deploy an application using gcloud commands.

You can also install the latest version from a downloaded .zip file:

  1. Download the .zip file and extract its contents. (Right-click on the downloaded file and select Extract All.)
  2. Launch the google-cloud-sdk\install.bat script and follow the installation prompts.
  3. When the installation finishes, restart the command prompt (cmd.exe).
  4. Run gcloud init:

C:\> gcloud init

Web-Browser Shell

After you are logged into the cloud console, click on the command prompt icon at the top right of the page, which will launch Cloud Shell with the Cloud SDK pre-installed. This is the best way to use the Google Cloud SDK.

Sometimes it may take a while to launch, depending on connection speed and browser, but once loaded you can use all the tools like gcloud, gsutil, bq, etc.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — –

Of all the tools and methods, the browser-based SDK seems to be the fastest and easiest way to work in GCP.


Now let's look at the commands in the SDK.

Google says:


gcloud commands have the following release levels:

  • General Availability (no label): Commands are considered fully stable and available for production use. Advance warnings will be made for commands that break current functionality and documented in the release notes.
  • Beta (label: beta): Commands are functionally complete, but may still have some outstanding issues. Breaking changes to these commands may be made without notice.
  • Alpha (label: alpha): Commands are in early release and may change without notice.
  • Preview (label: preview): Commands may be unstable and may change without notice.

The alpha and beta components are not installed by default when you install the SDK. You must install these separately using the gcloud components install command. If you try to run an alpha or beta command and the corresponding component is not installed, gcloud will prompt you to install it.

gcloud and gsutil commands are listed below.

gcloud commands

We will not go deep into all the commands mentioned above, but will list some that are used frequently.

gcloud --help                          # help page
gcloud -h                              # help for structuring commands
gcloud projects create PROJECT_ID      # create a new project
gcloud projects describe PROJECT_ID    # describe a project
gcloud projects list                   # list projects
gcloud projects delete PROJECT_ID      # delete a project
gcloud config set project PROJECT_ID   # set the active project in the shell

You can see that Google has structured the commands so that they are easy to discover rather than having to remember them whole. It is always possible to find the next portion of a command by typing -h.
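A few configuration and authentication commands are worth knowing alongside the project commands above; the zone and region values are just examples:

```shell
gcloud auth list                                  # accounts known to the SDK
gcloud config list                                # current configuration
gcloud config set compute/zone europe-north1-a    # default zone (example)
gcloud config set compute/region europe-north1    # default region (example)
gcloud components list                            # installed and available components
```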

GCP’s New interactive CLI

To install the interactive CLI in your web-browser shell:

$ gcloud components install alpha
$ gcloud alpha interactive

In this way, we installed, configured, and used various commands with the Google Cloud SDK in CLI mode.



Prashanta Paudel · Oct 16

Since the early days of learning cloud, I have heard about Google making the pricing of its cloud products very competitive and scientific. Today we will dig deeper into how pricing is done in Google Cloud.

Google claims up to 60% cost savings compared to other vendors.

GCP compared to others

These savings are achieved by:

  1. Sustained use discounts: running instances for a significant portion of the billing month reduces their cost. The longer your instances run, the bigger the discount you get.
  2. List price differences: Google reduces the price of services and instances over time, and price reductions are applied to your monthly billing immediately. This way you always get a lower rate than the standard pricing of that item.
  3. Rightsizing recommendations: Google Cloud provides recommendations for Compute Engine, based on eight days of Stackdriver monitoring, to resize VMs. If followed properly, these can result in savings of up to 15% of the total monthly price.

Google says: ”Compute Engine provides machine type recommendations to help you optimize the resource utilization of your virtual machine instances. These recommendations are generated automatically based on system metrics gathered by the Google Stackdriver Monitoring service over the previous 8 days. Use these recommendations to resize your instance’s machine type to more efficiently use the instance’s resources. This feature is also known as Rightsizing Recommendations.”

The important thing to note here is that Google doesn't use any third-party payment services. All billing happens on the platform, directly with Google. This largely avoids the hassle for users who want to use the services but are afraid of being cheated by a third party. There are no discounts other than those calculated by the system, and customers don't need to ask for them; they are applied automatically. This avoids the bargaining over services that is common elsewhere in IT.

Google charges no upfront cost for using basic products and services in GCP.

After setting up a GCP account and billing, you get a starting credit, and you will not be billed until the free balance or the trial period exceeds the free-tier limits.

You don't have to deposit any amount to use GCP products and services. Payment is usually made once a month, based on usage.

You can terminate the account anytime you want. No termination fees apply.

Another thing to note is that using preemptible VM instances reduces costs, as Google offers a special discount for these kinds of machines.

Google now calculates prices on a per-second basis. This is especially useful for instances that last only a short time and then destroy themselves, for example a file-processing system where a file is uploaded, processed, and downloaded to the client machine.
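To see why per-second billing matters for short-lived instances, here is a small calculation, assuming a hypothetical list price of $0.0475 per hour (note that Compute Engine bills a one-minute minimum per instance):

```shell
HOURLY_PRICE=0.0475   # assumed list price, USD per hour (illustration only)
SECONDS_RUN=90        # instance lived for 90 seconds

# cost = hourly price / 3600 * seconds run
awk -v p="$HOURLY_PRICE" -v s="$SECONDS_RUN" \
    'BEGIN { printf "Cost for %d s: $%.7f\n", s, p / 3600 * s }'
```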

Data that is not frequently used can be stored in Coldline storage, which is much cheaper than persistent disk or SSD. The advantage of Google's long-term storage is that it can be accessed like normal storage; you don't have to wait for hours.

A user can choose any machine type with any configuration, which has clear cost benefits over the fixed machine types from other vendors.

Users also get a discount for committed use over a set time period.


For convenience, Google provides a service where customers can predict the price of using instances and services on Google Cloud Platform. This service is called the Pricing Calculator.

You can access it at "Google Cloud Platform Pricing Calculator | Google Cloud Platform | Google Cloud".

Before jumping into the website, you should already know:

  1. What are you building?
  2. Where are you hosting it?
  3. What components are required for the system?

Then you can list all those instances and feed them to the pricing calculator, which will give you a tentative amount for the month or the period you select.

For example, say I am hosting a dynamic website in Finland that will be accessible throughout the world. I will have a database storing information from a form on the website, namely the names and addresses of volunteers.

So, now let’s list what instances we are talking about.

  1. Compute instance > VM > Linux (Debian or Ubuntu) > 10 GB persistent disk, 2 cores.
  2. Database > Cloud SQL > server, dual-core, 10 GB.

These are just examples; the information required by the pricing calculator may be more detailed. Now put these into the pricing calculator.

Putting all these selections into the pricing calculator gives an estimated bill of $114.04 per month.

Here you see a 30% reduction due to sustained use in both products.
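That 30% figure follows from how sustained-use discounts were computed for N1 machine types (an assumption based on the tiers Google documented at the time): each successive quarter of the month is billed at 100%, 80%, 60%, and 40% of list price, so a full month averages out to 70% of list. A quick check:

```shell
# Sustained-use tiers (assumed N1 rates): each quarter of the month
# billed at 100%, 80%, 60% and 40% of list price
awk 'BEGIN {
  n = split("1.0 0.8 0.6 0.4", rate, " ")
  total = 0
  for (q = 1; q <= n; q++) total += 0.25 * rate[q]
  printf "Effective price: %.0f%% of list (%.0f%% discount)\n", total * 100, (1 - total) * 100
}'
```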

So, estimating the price is very easy, but you should know in detail all the items that could contribute to the total.



Prashanta Paudel · Oct 16

One of the most important aspects of GCP is its compute resources. In this blog, we will see how to plan and configure compute resources.

It should be clear by now that we don't plan a compute resource alone; rather, we plan a system or service, based on Google Cloud Platform, that will need compute resources. See "How to Use the Google Cloud Platform | Solutions Gallery | Google Cloud".

That page lists various solutions and the infrastructure used to implement each system.

We can choose from a range of options for the particular problem at hand; sometimes the same problem can be solved by ten different methods.

The choice of a particular Google compute option is governed by your needs and the features available in that product.

Google has mentioned some of the needs, features, and common use cases.

The screenshots have been taken from the Google Cloud website.

Compute Engine use cases
Kubernetes use cases
App Engine Use cases


Google Compute Engine lets users create and run virtual machines on Google Cloud Platform. It offers scale, performance, and value that allow us to easily launch large compute clusters on Google's infrastructure. Using Compute Engine doesn't require any upfront cost. Compute Engine's tooling and workflow support enable scaling from single instances to global, load-balanced cloud computing.

Compute Engine VMs come with various types of disks, including persistent disk and SSD. You also have the ability to build a custom VM with the amount of disk and memory you choose. On top of that, you get a discount if you run it for a long period of time.

You can create a large cluster of compute resources and connect it to other data centers over Google's fast and efficient network.

Google Compute Engine consists of the following:


So, it is obvious that you have to be careful when selecting a solution based on the elements of Compute Engine.


An instance is a virtual machine (VM) hosted on Google’s infrastructure. You can create an instance by using the Google Cloud Platform Console or the command-line tool.


Compute Engine instances can run the public Linux and Windows images that Google provides on the platform, as well as images you create or import from your existing systems. You can also deploy Docker containers, which are automatically launched on instances running the Container-Optimized OS public image. During creation, you can select the number of virtual CPUs and the amount of memory by using a set of predefined machine types or by creating your own custom machine type.


Instances belong to a GCP project, and a project can have multiple instances. You have to define the zone, OS, and machine type when creating an instance. When you delete an instance, it is removed from the project.


Each instance has a persistent disk where the operating system is installed. If required, additional disks can be attached to the instance.


Each compute instance, when created, belongs to one VPC (virtual private cloud) network. Up to five VPCs can be added to a project. Instances in the same VPC communicate over a local network; a public IP is only used when an instance has to communicate outside the project.


Compute Engine instances support a declarative method for launching your applications using containers. When creating a VM or an instance template, you can provide a Docker image name and launch configuration. Compute Engine will take care of the rest including supplying an up-to-date Container-Optimized OS image with Docker installed and launching your container when the VM starts up.

Google says:


You can manage access to your instances using one of the following methods:

  • Linux instances:

1. Managing Instance Access Using OS Login, which allows you to associate SSH keys with your Google account or G Suite account and manage admin or non-admin access to the instance through IAM roles. If you connect to your instances using the command-line tool or SSH from the console, Compute Engine can automatically generate SSH keys for you and apply them to your Google account or G Suite account.

2. Manage your SSH keys in project or instance metadata, which grants admin access to instances with metadata access that does not use OS Login. If you connect to your instances using the command-line tool or SSH from the console, Compute Engine can automatically generate SSH keys for you and apply them to project metadata.

  • On Windows Server instances:

Create a password for a Windows Server instance


When implementing virtual machines, you have three options:

  1. Upload a custom VM from your system.
  2. Use the standard VMs available in the system.
  3. Build a custom VM from the available options.

Typically, defining a VM includes selecting memory size, virtual CPUs, disk space, etc.

So, let’s see the different options


Predefined machine types have a fixed collection of resources. They are managed by Google Compute Engine and come in the following classes:

1. Standard machine types

Standard machine types are suitable for tasks that have a balance of CPU and memory needs. Standard machine types have 3.75 GB of memory per vCPU.

You can have 1 to 96 vCPUs and 3.75 to 360 GB memory.

Persistent disk usage is charged separately from machine type pricing.

2. High-memory machine types

High-memory machine types are ideal for tasks that require more memory relative to vCPUs. High-memory machine types have 6.50GB of system memory per vCPU.

You can have 2 to 96 vCPUs and 13 to 624 GB Memory.

3. High-CPU machine types

High-CPU machine types are ideal for tasks that require more vCPUs relative to memory. High-CPU machine types have 0.90 GB of memory per vCPU.

You can have 2 to 96 vCPUs and 1.80 to 86.4 GB of memory.

4. Shared-core machine types

Shared-core machine types provide one vCPU that is allowed to run for a portion of the time on a single hardware hyper-thread on the host CPU running your instance. Shared-core instances can be more cost-effective for running small, non-resource intensive applications than standard, high-memory or high-CPU machine types.

f1-micro Bursting

f1-micro machine types offer bursting capabilities that allow instances to use additional physical CPU for short periods of time. Bursting happens automatically when your instance requires more physical CPU than originally allocated. During these spikes, your instance will opportunistically take advantage of available physical CPU in bursts. Note that bursts are not permanent and are only possible periodically.

Persistent disk usage is charged separately from machine type pricing.

5. Memory-optimized machine types

Memory-optimized machine types are ideal for tasks that require intensive use of memory, with higher memory-to-vCPU ratios than high-memory machine types. These machine types are perfectly suited for in-memory databases and in-memory analytics, such as SAP HANA and business warehousing (BW) workloads, genomics analysis, SQL analysis services, and more. Memory-optimized machine types have more than 14 GB of memory per vCPU.

See Regions and Zones to find where memory-optimized machine types are available.


If your requirements don't match any of the predefined machine types, you can go for a custom machine type.

Custom machine types are ideal for the following scenarios:

  • Workloads that are not a good fit for the predefined machine types that are available to you.
  • Workloads that require more processing power or more memory, but don’t need all of the upgrades that are provided by the next larger predefined machine type.

It costs slightly more to use a custom machine type than an equivalent predefined machine type, and there are still some limitations in the amount of memory and vCPUs you can select.
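As a sketch, the difference between the two comes down to the flags passed at creation time; the instance names and zone below are placeholders:

```shell
# Predefined machine type: n1-standard-2 has 2 vCPUs and 7.5 GB memory
gcloud compute instances create vm-standard \
    --zone=europe-north1-a \
    --machine-type=n1-standard-2

# Custom machine type: 2 vCPUs but 10 GB memory
gcloud compute instances create vm-custom \
    --zone=europe-north1-a \
    --custom-cpu=2 \
    --custom-memory=10GB
```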


You can attach GPUs only to instances with a predefined or custom machine type, and only in zones where GPUs are available. GPUs are not supported on shared-core machine types or memory-optimized machine types.


Instances that can be terminated at any time by Compute Engine are called preemptible VM instances. These VMs come at a lower price than normal instances, since Compute Engine has the right to kill them at any time.

These instances are best for fault-tolerant applications, such as batch processing, where an interruption does not break the whole system and work can continue later.

Preemptible instances function like normal instances, but have the following limitations:

  • Compute Engine might terminate preemptible instances at any time due to system events. The probability that Compute Engine will terminate a preemptible instance for a system event is generally low but might vary from day to day and from zone to zone depending on current conditions.
  • Compute Engine always terminates preemptible instances after they run for 24 hours.
  • Preemptible instances are finite Compute Engine resources, so they might not always be available.
  • Preemptible instances cannot live-migrate to a regular VM instance, or be set to automatically restart when there is a maintenance event.
  • Due to the above limitations, preemptible instances are not covered by any Service Level Agreement (and, for clarity, are excluded from the Google Compute Engine SLA).


Compute Engine performs the following steps to preempt an instance:

  1. Compute Engine sends a preemption notice to the instance in the form of an ACPI G2 Soft Off signal. You can use a shutdown script to handle the preemption notice and complete cleanup actions before the instance stops.
  2. If the instance does not stop after 30 seconds, Compute Engine sends an ACPI G3 Mechanical Off signal to the operating system.
  3. Compute Engine transitions the instance to a TERMINATED state.

You can simulate an instance preemption by stopping the instance.
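A sketch of creating a preemptible instance with a shutdown script that handles the preemption notice; the instance name, zone, and bucket are placeholders:

```shell
# Shutdown script: runs on the ACPI G2 signal, with roughly 30 seconds
# to finish before the instance is forced off
cat > shutdown.sh << 'EOF'
#!/bin/bash
# Hypothetical cleanup: push a checkpoint to Cloud Storage
gsutil cp /tmp/checkpoint.dat gs://my-bucket/checkpoints/ || true
EOF

gcloud compute instances create batch-worker-1 \
    --zone=europe-north1-a \
    --preemptible \
    --metadata-from-file=shutdown-script=shutdown.sh
```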


DAY 88: 5th Dec 2018

At last, I got certified on the first attempt!

Main Certificate
