Google Professional Architect 7 - 121 to 140


Assessment • Quiz

Instructional Technology • Professional Development • Medium

Created by George Wurthmann

20 questions

1.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

You need to deploy an application on Google Cloud that must run on a Debian Linux environment. The application requires extensive configuration in order to operate correctly. You want to ensure that you can install Debian distribution updates with minimal manual intervention whenever they become available. What should you do?
Create a Compute Engine instance template using the most recent Debian image. Create an instance from this template, and install and configure the application as part of the startup script. Repeat this process whenever a new Google-managed Debian image becomes available.
Create a Debian-based Compute Engine instance, install and configure the application, and use OS patch management to install available updates.
Create an instance with the latest available Debian image. Connect to the instance via SSH, and install and configure the application on the instance. Repeat this process whenever a new Google-managed Debian image becomes available.
Create a Docker container with Debian as the base image. Install and configure the application as part of the Docker image creation process. Host the container on Google Kubernetes Engine and restart the container whenever a new update is available.
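
A minimal sketch, assuming hypothetical project, zone, and instance names, of how a Debian-based Compute Engine instance can be created from Google's managed image family with the google-cloud-compute Python client; once the VM exists, VM Manager's OS patch management can apply Debian distribution updates without rebuilding or re-imaging the instance.

```python
from google.cloud import compute_v1


def create_debian_vm(project: str, zone: str, name: str) -> None:
    """Create a VM whose boot disk comes from Google's managed Debian image family."""
    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            # Image *family* reference: resolves to the latest published Debian 12 image.
            source_image="projects/debian-cloud/global/images/family/debian-12",
            disk_size_gb=20,
        ),
    )
    instance = compute_v1.Instance(
        name=name,
        machine_type=f"zones/{zone}/machineTypes/e2-medium",
        disks=[boot_disk],
        network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
    )
    operation = compute_v1.InstancesClient().insert(
        project=project, zone=zone, instance_resource=instance
    )
    operation.result()  # block until the VM is created


# create_debian_vm("my-project", "us-central1-a", "app-server-1")  # hypothetical values
```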

2.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

You have an application that runs in Google Kubernetes Engine (GKE). Over the last 2 weeks, customers have reported that a specific part of the application returns errors very frequently. You currently have no logging or monitoring solution enabled on your GKE cluster. You want to diagnose the problem, but you have not been able to replicate the issue. You want to cause minimal disruption to the application. What should you do?
1. Update your GKE cluster to use Cloud Operations for GKE. 2. Use the GKE Monitoring dashboard to investigate logs from affected Pods.
1. Create a new GKE cluster with Cloud Operations for GKE enabled. 2. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster. 3. Use the GKE Monitoring dashboard to investigate logs from affected Pods.
1. Update your GKE cluster to use Cloud Operations for GKE, and deploy Prometheus. 2. Set an alert to trigger whenever the application returns an error.
1. Create a new GKE cluster with Cloud Operations for GKE enabled, and deploy Prometheus. 2. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster. 3. Set an alert to trigger whenever the application returns an error.
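
Once Cloud Operations for GKE is enabled on the existing cluster, the logs behind the GKE Monitoring dashboard can also be queried directly. A minimal sketch with the google-cloud-logging client, assuming hypothetical project, cluster, and namespace names:

```python
from google.cloud import logging
from google.cloud.logging import DESCENDING

client = logging.Client(project="my-project")  # hypothetical project ID

# Errors emitted by containers in one namespace of the existing GKE cluster.
log_filter = (
    'resource.type="k8s_container" '
    'AND resource.labels.cluster_name="prod-cluster" '  # hypothetical cluster name
    'AND resource.labels.namespace_name="checkout" '    # hypothetical namespace
    "AND severity>=ERROR"
)

for i, entry in enumerate(client.list_entries(filter_=log_filter, order_by=DESCENDING)):
    print(entry.timestamp, entry.resource.labels.get("pod_name"), entry.payload)
    if i >= 19:  # look at only the 20 most recent errors
        break
```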

3.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

You need to deploy a stateful workload on Google Cloud. The workload can scale horizontally, but each instance needs to read and write to the same POSIX filesystem. At high load, the stateful workload needs to support up to 100 MB/s of writes. What should you do?
Use a persistent disk for each instance.
Use a regional persistent disk for each instance.
Create a Cloud Filestore instance and mount it in each instance.
Create a Cloud Storage bucket and mount it in each instance using gcsfuse.
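
Cloud Filestore is the managed NFS option here, so every instance can mount the same POSIX filesystem. A rough sketch of creating an instance with the google-cloud-filestore client; the tier, capacity, and names are assumptions (Basic SSD is chosen to comfortably cover roughly 100 MB/s of writes; check current Filestore performance limits before sizing):

```python
from google.cloud import filestore_v1

client = filestore_v1.CloudFilestoreManagerClient()

instance = filestore_v1.Instance(
    tier=filestore_v1.Instance.Tier.BASIC_SSD,  # assumption: chosen for the write-throughput target
    file_shares=[filestore_v1.FileShareConfig(name="shared", capacity_gb=2560)],
    networks=[filestore_v1.NetworkConfig(network="default")],
)

operation = client.create_instance(
    request=filestore_v1.CreateInstanceRequest(
        parent="projects/my-project/locations/us-central1-a",  # hypothetical project/zone
        instance_id="shared-posix-fs",                         # hypothetical instance name
        instance=instance,
    )
)
operation.result()  # wait for provisioning; each VM then NFS-mounts the "shared" file share
```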

4.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

Your company has an application deployed on Anthos clusters (formerly Anthos GKE) that is running multiple microservices. The cluster has both Anthos Service Mesh and Anthos Config Management configured. End users inform you that the application is responding very slowly. You want to identify the microservice that is causing the delay. What should you do?
Use Anthos Config Management to create a ClusterSelector selecting the relevant cluster. On the Google Cloud Console page for Google Kubernetes Engine, view the Workloads and filter on the cluster. Inspect the configurations of the filtered workloads.
Use the Service Mesh visualization in the Cloud Console to inspect the telemetry between the microservices.
Use Anthos Config Management to create a NamespaceSelector selecting the relevant cluster namespace. On the Google Cloud Console page for Google Kubernetes Engine, view the Workloads and filter on the namespace. Inspect the configurations of the filtered workloads.
Reinstall Istio using the default Istio profile in order to collect request latency. Evaluate the telemetry between the microservices in the Cloud Console.
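
The Service Mesh pages in the Cloud Console are built on telemetry that Anthos Service Mesh exports to Cloud Monitoring, so the same latency data can also be pulled through the Monitoring API. A sketch with the google-cloud-monitoring client; the project ID is hypothetical and the metric name is assumed to be ASM's server-side latency metric:

```python
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
now = int(time.time())

# Last hour of server-side latency telemetry exported by the mesh.
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
)
series_iter = client.list_time_series(
    request={
        "name": "projects/my-project",  # hypothetical project ID
        "filter": 'metric.type = "istio.io/service/server/response_latencies"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for series in series_iter:
    # Each series is one service/label combination; compare them to spot the slow hop.
    print(dict(series.resource.labels), len(series.points))
```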

5.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

You are working at a financial institution that stores mortgage loan approval documents on Cloud Storage. Any change to these approval documents must be uploaded as a separate approval file, so you want to ensure that these documents cannot be deleted or overwritten for the next 5 years. What should you do?
Create a retention policy on the bucket for the duration of 5 years. Create a lock on the retention policy.
Create the bucket with uniform bucket-level access, and grant a service account the role of Object Writer. Use the service account to upload new files.
Use a customer-managed key for the encryption of the bucket. Rotate the key after 5 years.
Create the bucket with fine-grained access control, and grant a service account the role of Object Writer. Use the service account to upload new files.
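
A minimal sketch of the retention-policy-and-lock approach with the google-cloud-storage client, assuming a hypothetical bucket name; note that locking a retention policy is irreversible:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("loan-approval-docs")  # hypothetical bucket name

# Objects cannot be deleted or overwritten until they are at least 5 years old.
bucket.retention_period = 5 * 365 * 24 * 60 * 60  # seconds
bucket.patch()

# Locking is permanent: the policy can no longer be removed or shortened.
bucket.reload()  # refresh metageneration, which the lock request must match
bucket.lock_retention_policy()
```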

6.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

Your team will start developing a new application using microservices architecture on Kubernetes Engine. As part of the development lifecycle, any code change that has been pushed to the remote develop branch on your GitHub repository should be built and tested automatically. When the build and test are successful, the relevant microservice will be deployed automatically in the development environment. You want to ensure that all code deployed in the development environment follows this process. What should you do?
Have each developer install a pre-commit hook on their workstation that tests the code and builds the container when committing on the development branch. After a successful commit, have the developer deploy the newly built container image on the development cluster.
Install a post-commit hook on the remote git repository that tests the code and builds the container when code is pushed to the development branch. After a successful commit, have the developer deploy the newly built container image on the development cluster.
Create a Cloud Build trigger based on the development branch that tests the code, builds the container, and stores it in Container Registry. Create a deployment pipeline that watches for new images and deploys the new image on the development cluster. Ensure only the deployment tool has access to deploy new versions.
Create a Cloud Build trigger based on the development branch to build a new container image and store it in Container Registry. Rely on Vulnerability Scanning to ensure the code tests succeed. As the final step of the Cloud Build process, deploy the new container image on the development cluster. Ensure only Cloud Build has access to deploy new versions.
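
The trigger half of such a pipeline can be created programmatically. A sketch with the google-cloud-build client, assuming a GitHub repository already connected to Cloud Build, a cloudbuild.yaml in the repository that runs the tests, builds the image, and pushes it to the registry, and hypothetical organization, repository, and project names:

```python
from google.cloud.devtools import cloudbuild_v1

client = cloudbuild_v1.CloudBuildClient()

trigger = cloudbuild_v1.BuildTrigger(
    name="develop-ci",            # hypothetical trigger name
    filename="cloudbuild.yaml",   # steps in the repo: run tests, build the image, push it
    github=cloudbuild_v1.GitHubEventsConfig(
        owner="my-org",           # hypothetical GitHub organization
        name="my-repo",           # hypothetical repository
        push=cloudbuild_v1.PushFilter(branch="^develop$"),
    ),
)

client.create_build_trigger(project_id="my-project", trigger=trigger)  # hypothetical project ID
```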

7.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

Your operations team has asked you to help diagnose a performance issue in a production application that runs on Compute Engine. The application is dropping requests that reach it when under heavy load. The process list for affected instances shows a single application process that is consuming all available CPU, and autoscaling has reached the upper limit of instances. There is no abnormal load on any other related systems, including the database. You want to allow production traffic to be served again as quickly as possible. Which action should you recommend?
Change the autoscaling metric to agent.googleapis.com/memory/percent_used.
Restart the affected instances on a staggered schedule.
SSH to each instance and restart the application process.
Increase the maximum number of instances in the autoscaling group.
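
Raising the autoscaler's upper bound can be done in the Cloud Console, with gcloud, or through the API. A sketch with the google-cloud-compute client that reads a zonal autoscaler and lifts max_num_replicas; the project, zone, and autoscaler names are hypothetical:

```python
from google.cloud import compute_v1


def raise_autoscaler_ceiling(project: str, zone: str, name: str, new_max: int) -> None:
    client = compute_v1.AutoscalersClient()
    autoscaler = client.get(project=project, zone=zone, autoscaler=name)
    # Lift only the upper bound; the existing scaling signal stays unchanged.
    autoscaler.autoscaling_policy.max_num_replicas = new_max
    operation = client.update(
        request=compute_v1.UpdateAutoscalerRequest(
            project=project,
            zone=zone,
            autoscaler=name,
            autoscaler_resource=autoscaler,
        )
    )
    operation.result()  # wait until the managed instance group is allowed to grow


# raise_autoscaler_ceiling("my-project", "us-central1-a", "web-autoscaler", 30)  # hypothetical values
```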
