Weekly Consultation 18 CC-39

University

30 Qs

Similar activities

Practice Questions DP 300 (University, 25 Qs)
Google Prof Cloud Archi - pt 2 (University, 30 Qs)
Google Prof Cloud Archi - pt 6 (University, 30 Qs)
CertyIQ - Google - Prof Data Eng - pt 6 (University, 30 Qs)
CertyIQ - Google - Prof Data Eng - pt 5 (University, 30 Qs)
Cloud Computing (University, 25 Qs)
DBMS_UNIT1 (University, 25 Qs)
Google Prof Cloud Archi - pt 3 (University, 30 Qs)

Assessment · Quiz · Computers · University · Medium

Created by CC-39 Alexander · Used 1+ times

30 questions

1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

An internal company application is deployed on Compute Engine VMs. The application is used only during regular business hours. You need to back up the VMs outside business hours and remove images older than 30 days to reduce expenses. What should you do?

You should add three metadata tags on the Compute Engine instance (enabling snapshot creation, specifying the snapshot schedule, specifying the retention period = 30 days).

You should enable a snapshot schedule for automated creation of daily snapshots and set snapshot retention policy to 30 days.

You should use AppEngine Cron service to trigger a custom script that creates snapshots of the disk on a daily basis. Also you should use AppEngine Cron service to trigger another custom script that iterates over the snapshots and removes snapshots older than 30 days.

You should use Cloud Scheduler to trigger a Cloud Function that creates snapshots of the disk on a daily basis. Also, you should use Cloud Scheduler to trigger another Cloud Function that iterates over the snapshots and removes those older than 30 days.

Answer explanation

You should enable a snapshot schedule for automated creation of daily snapshots and set snapshot retention policy to 30 days. It uses the built-in Google Cloud snapshot scheduling and retention feature, which allows you to automate the creation of snapshots and the deletion of old snapshots, thus reducing manual effort and potential errors. In Google Cloud, you can create snapshot schedules for persistent disks, which allow automated creation of snapshots on a regular schedule. You can also set the snapshot retention policy to specify how long you want the snapshots to be retained (in this case, 30 days).
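As a sketch of the built-in approach (the policy name `backup-schedule`, disk name `app-disk`, region/zone, and the 22:00 start time are illustrative assumptions), the schedule and retention policy can be created with `gcloud` and attached to the VM's disk:

```shell
# Create a snapshot schedule that runs daily (outside business hours)
# and keeps each snapshot for 30 days. All names/times are placeholders.
gcloud compute resource-policies create snapshot-schedule backup-schedule \
    --region=us-central1 \
    --daily-schedule \
    --start-time=22:00 \
    --max-retention-days=30

# Attach the schedule to the VM's persistent disk
gcloud compute disks add-resource-policies app-disk \
    --resource-policies=backup-schedule \
    --zone=us-central1-a
```

Once attached, Compute Engine both creates the daily snapshots and deletes any snapshot older than 30 days, with no scheduler or custom script to maintain.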

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You are deploying an application to App Engine and want to scale the number of instances based on request rate. You need at least 3 unoccupied instances at all times. What type of scaling should you use?

Automatic Scaling with min_idle_instances set to 3.

Manual Scaling with 3 instances.

Basic Scaling with max_instances set to 3.

Basic Scaling with min_instances set to 3.

Answer explanation

Automatic Scaling in Google App Engine allows your application to automatically adjust the number of instances it's using based on the traffic it's receiving. The min_idle_instances configuration parameter that you can set with Automatic Scaling specifies the minimum number of idle instances that App Engine should maintain for your application. Idle instances are those that are ready to handle incoming requests. By setting min_idle_instances to 3, you're ensuring that there are always at least 3 instances ready to handle incoming requests immediately, providing faster response times for users during sudden traffic spikes.
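In `app.yaml`, this looks roughly like the fragment below; the runtime line is an illustrative assumption, and only the `min_idle_instances: 3` setting is what the question calls for:

```yaml
# app.yaml (sketch): automatic scaling with at least 3 idle instances
runtime: python311          # runtime chosen for illustration only
automatic_scaling:
  min_idle_instances: 3     # always keep >= 3 idle instances ready
```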

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You want to create a custom VPC with a single subnet. The range of the subnet must be as large as possible. What range should you use?

192.168.0.0/16

10.0.0.0/8

172.16.0.0/12

0.0.0.0/0

Answer explanation

10.0.0.0/8 is the largest private IPv4 network range as defined by RFC 1918. This range allows for up to 16,777,214 (2^24 - 2) usable IP addresses, which is larger than the other options. This is the most appropriate range to use for a single subnet where the goal is to maximize the available addresses.
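The usable-address counts behind this comparison can be checked with a little shell arithmetic (the 2 subtracted addresses are the conventional network and broadcast addresses):

```shell
# Usable IPv4 addresses in each RFC 1918 range: 2^(32 - prefix) - 2
for prefix in 8 12 16; do
  echo "/${prefix}: $(( 2 ** (32 - prefix) - 2 )) usable addresses"
done
# prints:
# /8: 16777214 usable addresses
# /12: 1048574 usable addresses
# /16: 65534 usable addresses
```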

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

A company is planning to migrate its existing on-premises infrastructure to the cloud. They have identified the following requirements: scalability, high availability, and cost optimization. Which of the following strategies would be most suitable for meeting these requirements?

Multi-region deployment: Deploying the infrastructure across multiple regions to ensure redundancy and fault tolerance.

Containerization: Migrating applications to container platforms for increased scalability, isolation, and portability.

Lift and shift migration: Moving the existing infrastructure as-is to the cloud without making significant architectural changes.

Serverless architecture: Developing applications using serverless services to achieve automatic scaling, high availability, and pay-as-you-go pricing.

Answer explanation

Serverless architecture is most suitable for meeting the requirements of scalability, high availability, and cost optimization. Serverless services, such as AWS Lambda or Azure Functions, automatically scale based on demand, ensuring scalability. They also provide built-in high availability and charge based on the actual usage, promoting cost optimization.

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

You are asked to deploy a web application so that it can scale based on HTTP traffic. You already have an instance template that contains this web application. What should you do?

You should create a virtual machine from the instance template. Then create an App Engine application in Automatic Scaling mode that forwards all traffic to this virtual machine.

You should create the necessary number of instances required for peak traffic based on the instance template.

You should create a Managed Instance Group based on the instance template. Then configure autoscaling based on HTTP traffic and configure the instance group as the backend service of an HTTP load balancer.

You should create a managed instance group based on the instance template. Then configure autoscaling based on CPU utilization.

Answer explanation

Managed Instance Groups in Google Cloud offer autoscaling, which allows the group to automatically add or remove instances based on increases or decreases in load, measured according to a chosen metric, in this case, HTTP traffic. This setup ensures that the application can scale up or down based on the amount of incoming traffic, maintaining performance while also controlling costs. Moreover, by configuring the MIG as the backend service of an HTTP(S) load balancer, incoming traffic can be evenly distributed across all instances in the group, ensuring high availability and reliable performance.
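A rough `gcloud` sketch of this setup (all resource names, zones, sizes, and the 0.8 utilization target are placeholders assumed for illustration):

```shell
# Create a managed instance group from the existing instance template
gcloud compute instance-groups managed create web-mig \
    --template=web-app-template \
    --size=2 \
    --zone=us-central1-a

# Autoscale on HTTP load-balancing serving capacity, not CPU
gcloud compute instance-groups managed set-autoscaling web-mig \
    --zone=us-central1-a \
    --min-num-replicas=2 \
    --max-num-replicas=10 \
    --target-load-balancing-utilization=0.8

# Add the group as a backend of an HTTP(S) load balancer's backend service
gcloud compute backend-services add-backend web-backend-service \
    --instance-group=web-mig \
    --instance-group-zone=us-central1-a \
    --global
```

The `--target-load-balancing-utilization` flag is what ties scaling decisions to HTTP traffic arriving through the load balancer, rather than to CPU utilization.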

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Your organization is looking to implement a disaster recovery strategy that involves replicating data between your primary Google Cloud region and a secondary region. Which service would you use to ensure that your data is synchronized and that the secondary site can take over in case of a regional outage?

Utilize Cloud Storage with Multi-Regional buckets to store and replicate data.

Implement Cloud Spanner for automatic synchronous data replication across regions.

Set up a Regional Persistent Disk and replicate it across two regions.

Create a snapshot of your VMs and copy them to the secondary region periodically.

Answer explanation

Cloud Spanner is a fully managed relational database with built-in synchronous replication across regions, providing immediate consistency and failover capabilities, making it ideal for a disaster recovery setup.
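As a minimal sketch, choosing a multi-region instance configuration is what enables the cross-region synchronous replication (the instance name, node count, and the `nam3` configuration are illustrative assumptions):

```shell
# Create a Cloud Spanner instance on a multi-region configuration;
# nam3 spans multiple North American regions with synchronous replication
gcloud spanner instances create dr-instance \
    --config=nam3 \
    --description="Multi-region instance for disaster recovery" \
    --nodes=1
```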

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Which Google Cloud service can be used to perform real-time stream processing and analytics on large volumes of data?

Cloud Functions

Cloud Dataflow

BigQuery

Cloud Pub/Sub

Answer explanation

Cloud Dataflow is a fully managed, serverless service for real-time stream processing and batch processing of data. It provides a unified programming model for building data pipelines and can handle large volumes of data with automatic scaling.
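As one hedged example of a streaming Dataflow pipeline, a Google-provided template can read from Pub/Sub and write to BigQuery (the job name, project, topic, and table below are placeholder assumptions):

```shell
# Launch a streaming Dataflow job from a Google-provided template that
# reads from a Pub/Sub topic and writes rows to a BigQuery table
gcloud dataflow jobs run pubsub-to-bq \
    --gcs-location=gs://dataflow-templates/latest/PubSub_to_BigQuery \
    --region=us-central1 \
    --parameters=inputTopic=projects/my-project/topics/events,outputTableSpec=my-project:analytics.events
```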
