Test 2

Assessment

Quiz

Life Skills

Professional Development

Easy

Created by Ariel Cruz

50 questions

1.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

You need to create an autoscaling managed instance group for an HTTPS web application. You want to make sure that unhealthy VMs are recreated. What should you do?

Create a health check on port 443 and use that when creating the Managed Instance Group.

Select Multi-Zone instead of Single-Zone when creating the Managed Instance Group.

In the Instance Template, add the label 'health-check'.

In the Instance Template, add a startup script that sends a heartbeat to the metadata server.

Answer explanation

Create a health check on port 443 and use that when creating the Managed Instance Group. To ensure that unhealthy VMs are recreated, a health check should be created to monitor the instances in the managed instance group. This health check should be configured to check the appropriate endpoint for the web application, which in this case would be port 443 for HTTPS. If an instance is determined to be unhealthy, the instance group will automatically recreate it. See https://cloud.google.com/compute/docs/instance-groups/autohealing-instances-in-migs#setting_up_an_autohealing_policy
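
As a rough sketch of the accepted answer, the health check and autohealing policy could be set up with the gcloud CLI along these lines; the health check name, instance group name, zone, and request path are placeholder values:

# Create an HTTPS health check on port 443 (names and path are example values)
gcloud compute health-checks create https web-https-check \
    --port=443 \
    --request-path=/healthz

# Attach the health check to the managed instance group as an autohealing policy
gcloud compute instance-groups managed update web-mig \
    --zone=us-central1-a \
    --health-check=web-https-check \
    --initial-delay=300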

2.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

Your company has a Google Cloud Platform project that uses BigQuery for data warehousing. Your data science team changes frequently and has few members. You need to allow members of this team to perform queries. You want to follow Google-recommended practices. What should you do?

1. Create an IAM entry for each data scientist's user account. 2. Assign the BigQuery jobUser role to the group.

1. Create an IAM entry for each data scientist's user account. 2. Assign the BigQuery dataViewer user role to the group.

1. Create a dedicated Google group in Cloud Identity. 2. Add each data scientist's user account to the group. 3. Assign the BigQuery jobUser role to the group.

1. Create a dedicated Google group in Cloud Identity. 2. Add each data scientist's user account to the group. 3. Assign the BigQuery dataViewer user role to the group.

Answer explanation

Creating a dedicated Google group in Cloud Identity is a good practice because it simplifies user management. Rather than adding individual users to each resource's IAM policy, you can add the group to the resource's IAM policy. This way, you only need to manage the group membership rather than each user's permissions. Also, the BigQuery jobUser role provides the necessary permission to run queries and jobs, which is appropriate for data scientists who need to perform queries.
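
For illustration, once the Google group exists in Cloud Identity, the membership and role binding described above might look like the following; the project ID, group email, and member email are placeholders:

# Add a data scientist to the group (repeat per member)
gcloud identity groups memberships add \
    --group-email="data-science@example.com" \
    --member-email="alice@example.com"

# Grant the BigQuery jobUser role to the group at the project level
gcloud projects add-iam-policy-binding my-project-id \
    --member="group:data-science@example.com" \
    --role="roles/bigquery.jobUser"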

3.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

Your company has a 3-tier solution running on Compute Engine. The configuration of the current infrastructure is shown in the image GCP- Question53.png. Each tier has a service account that is associated with all instances within it. You need to enable communication on TCP port 8080 between tiers as follows: instances in tier #1 must communicate with tier #2, and instances in tier #2 must communicate with tier #3. What should you do?

1. Create an ingress firewall rule with the following settings: • Targets: all instances • Source filter: IP ranges (with the range set to 10.0.2.0/24) • Protocols: allow all 2. Create an ingress firewall rule with the following settings: • Targets: all instances • Source filter: IP ranges (with the range set to 10.0.1.0/24) • Protocols: allow all

1. Create an ingress firewall rule with the following settings: • Targets: all instances with tier #2 service account • Source filter: all instances with tier #1 service account • Protocols: allow TCP:8080 2. Create an ingress firewall rule with the following settings: • Targets: all instances with tier #3 service account • Source filter: all instances with tier #2 service account • Protocols: allow TCP:8080

1. Create an ingress firewall rule with the following settings: • Targets: all instances with tier #2 service account • Source filter: all instances with tier #1 service account • Protocols: allow all 2. Create an ingress firewall rule with the following settings: • Targets: all instances with tier #3 service account • Source filter: all instances with tier #2 service account • Protocols: allow all

1. Create an egress firewall rule with the following settings: • Targets: all instances • Source filter: IP ranges (with the range set to 10.0.2.0/24) • Protocols: allow TCP:8080 2. Create an egress firewall rule with the following settings: • Targets: all instances • Source filter: IP ranges (with the range set to 10.0.1.0/24) • Protocols: allow TCP:8080

Answer explanation

The second option creates ingress firewall rules that allow communication between the instances in the different tiers on TCP port 8080, based on their associated service accounts. The first rule allows traffic from instances with the tier #1 service account to instances with the tier #2 service account. The second rule allows traffic from instances with the tier #2 service account to instances with the tier #3 service account. Because the rules are scoped by service account and restricted to TCP:8080, only the appropriate instances can communicate with each other.
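
A sketch of the two rules with the gcloud CLI, assuming placeholder VPC and service account names:

# Allow tier #1 -> tier #2 on TCP 8080, scoped by service account
gcloud compute firewall-rules create allow-tier1-to-tier2 \
    --network=my-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:8080 \
    --source-service-accounts=tier1-sa@my-project.iam.gserviceaccount.com \
    --target-service-accounts=tier2-sa@my-project.iam.gserviceaccount.com

# Allow tier #2 -> tier #3 on TCP 8080
gcloud compute firewall-rules create allow-tier2-to-tier3 \
    --network=my-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:8080 \
    --source-service-accounts=tier2-sa@my-project.iam.gserviceaccount.com \
    --target-service-accounts=tier3-sa@my-project.iam.gserviceaccount.com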

4.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

You are given a project with a single Virtual Private Cloud (VPC) and a single subnetwork in the us-central1 region. There is a Compute Engine instance hosting an application in this subnetwork. You need to deploy a new instance in the same project in the europe-west1 region. This new instance needs access to the application. You want to follow Google-recommended practices. What should you do?

1. Create a subnetwork in the same VPC, in europe-west1. 2. Create the new instance in the new subnetwork and use the first instance's private address as the endpoint.

1. Create a VPC and a subnetwork in europe-west1. 2. Expose the application with an internal load balancer. 3. Create the new instance in the new subnetwork and use the load balancer's address as the endpoint.

1. Create a subnetwork in the same VPC, in europe-west1. 2. Use Cloud VPN to connect the two subnetworks. 3. Create the new instance in the new subnetwork and use the first instance's private address as the endpoint.

1. Create a VPC and a subnetwork in europe-west1. 2. Peer the 2 VPCs. 3. Create the new instance in the new subnetwork and use the first instance's private address as the endpoint.

Answer explanation

The first option is correct because it follows Google's recommended practice of using a single VPC per project: create a new subnetwork in the same VPC in the europe-west1 region. Subnets in the same VPC can reach each other over internal IP addresses by default, so the new instance can communicate with the existing instance using its private IP address as the endpoint.
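
A minimal sketch of that approach, assuming placeholder names for the VPC, subnet, IP range, and instance:

# Add a subnet to the existing VPC in europe-west1 (the range is an example)
gcloud compute networks subnets create europe-subnet \
    --network=my-vpc \
    --region=europe-west1 \
    --range=10.1.0.0/24

# Create the new instance in that subnet; it can reach the us-central1 instance
# over its private IP because both subnets belong to the same VPC
gcloud compute instances create app-client \
    --zone=europe-west1-b \
    --subnet=europe-subnet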

5.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

Your projects incurred more costs than you expected last month. Your research reveals that a development GKE container emitted a huge number of logs, which resulted in higher costs. You want to disable the logs quickly using the minimum number of steps. What should you do?

1. Go to the Logs ingestion window in Stackdriver Logging, and disable the log source for the GKE container resource.

1. Go to the Logs ingestion window in Stackdriver Logging, and disable the log source for the GKE Cluster Operations resource.

1. Go to the GKE console, and delete existing clusters. 2. Recreate a new cluster. 3. Clear the option to enable legacy Stackdriver Logging.

1. Go to the GKE console, and delete existing clusters. 2. Recreate a new cluster. 3. Clear the option to enable legacy Stackdriver Monitoring.

Answer explanation

To disable the logs quickly using the minimum number of steps, go to the Logs ingestion window in Stackdriver Logging and disable the log source for the GKE container resource. This stops Stackdriver Logging from ingesting logs for that resource in a single step, which reduces the volume of stored log data and lowers the costs incurred, without deleting or recreating the cluster.
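
The question describes the legacy Logs ingestion window, but a comparable change from the command line today would usually be a log exclusion on the _Default sink; the sketch below is an assumption-laden example (the exclusion name, the resource.type value, and support for the --add-exclusion flag in the installed gcloud version should all be verified):

# Stop ingesting GKE container logs via an exclusion on the _Default sink
# (legacy GKE logging used resource.type="container"; newer clusters use "k8s_container")
gcloud logging sinks update _Default \
    --add-exclusion=name=exclude-gke-container-logs,filter='resource.type="k8s_container"'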

6.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

You have a website hosted on App Engine standard environment. You want 1% of your users to see a new test version of the website. You want to minimize complexity. What should you do?

Deploy the new version in the same application and use the --migrate option.

Deploy the new version in the same application and use the --splits option to give a weight of 99 to the current version and a weight of 1 to the new version.

Create a new App Engine application in the same project. Deploy the new version in that application. Use the App Engine library to proxy 1% of the requests to the new version.

Create a new App Engine application in the same project. Deploy the new version in that application. Configure your network load balancer to send 1% of the traffic to that new application.

Answer explanation

By using the App Engine's traffic splitting feature, we can easily direct a certain percentage of traffic to a specific version of our application. In this case, we want to send 1% of traffic to the new test version and keep the remaining 99% on the current version. This can be achieved by deploying the new version in the same application and using the `--splits` option to give a weight of 99 to the current version and a weight of 1 to the new version.
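
For example, assuming the default service with the current version deployed as v1 and the test version as v2, the split could be applied like this:

# Deploy the new version without routing any traffic to it yet
gcloud app deploy app.yaml --version=v2 --no-promote

# Send 1% of traffic to v2 and keep 99% on v1
gcloud app services set-traffic default --splits=v1=0.99,v2=0.01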

7.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

You have a web application deployed as a managed instance group. You have a new version of the application to gradually deploy. Your web application is currently receiving live web traffic. You want to ensure that the available capacity does not decrease during the deployment. What should you do?

Perform a rolling-action start-update with maxSurge set to 0 and maxUnavailable set to 1.

Perform a rolling-action start-update with maxSurge set to 1 and maxUnavailable set to 0.

Create a new managed instance group with an updated instance template. Add the group to the backend service for the load balancer. When all instances in the new managed instance group are healthy, delete the old managed instance group.

Create a new instance template with the new application version. Update the existing managed instance group with the new instance template. Delete the instances in the managed instance group to allow the managed instance group to recreate the instance using the new instance template.

Answer explanation

To keep the available capacity intact, maxUnavailable must be set to 0 so that no serving instance is taken offline during the update. To allow new instances to be created before old ones are removed, maxSurge is set to 1. The third option would also work but is more expensive and more complex to set up, and the fourth option does not meet the requirement because deleting instances reduces capacity while they are recreated.
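
A minimal sketch of that rolling update with the gcloud CLI, assuming placeholder names for the instance group, new instance template, and zone:

# Start a rolling update that adds one extra instance at a time (maxSurge=1)
# and never takes a serving instance offline (maxUnavailable=0)
gcloud compute instance-groups managed rolling-action start-update web-mig \
    --zone=us-central1-a \
    --version=template=web-template-v2 \
    --max-surge=1 \
    --max-unavailable=0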
