AWS Question

Professional Development

5 questions • Hard


1.

MULTIPLE CHOICE QUESTION

2 mins • 10 pts

A communication company has deployed several EC2 instances in the ap-southeast-1 region to monitor user activities. The AWS administrator has configured an EBS snapshot lifecycle policy that creates a snapshot of each EBS volume every day to preserve data. The retention count is set to 5, meaning the oldest snapshot is deleted after 5 days. The administrator plans to manually copy some snapshots to another region, ap-southeast-2, because they contain important data. Will these copied snapshots be retained?

The copied snapshots may be deleted after the retention period, as they are still affected by the retention policy

The copied snapshots can be kept only if they are copied to another region; otherwise, they may be deleted by the retention policy. In this case, the snapshots can be kept

The copied snapshots can be kept as the retention schedule is not carried over to the copy

The copied snapshots in region ap-southeast-2 will be deleted after 5 days unless the delete protection option is enabled

Answer explanation

Correct Answer: C

Copying a snapshot to a new region is commonly used for geographic expansion, migration, disaster recovery, and so on.

EBS snapshot lifecycle policies follow several rules. One of them is that when you copy a snapshot created by a policy, the new copy is not governed by the policy's retention schedule (see the sketch after the option notes below).

  • Option A is incorrect because the copied snapshots will be retained.

  • Option B is incorrect because copied snapshots are retained regardless of whether they are copied to the same region or a different one.

  • Option C is CORRECT because the new snapshots are not affected by the original policy.

  • Option D is incorrect because there is no delete protection option for snapshots.
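
As a minimal boto3 sketch of such a manual cross-region copy (the snapshot ID and description below are placeholders), the copy is requested from the destination region:

    import boto3

    # Run the copy from the destination region, ap-southeast-2.
    ec2 = boto3.client("ec2", region_name="ap-southeast-2")

    # The copy is a new, independent snapshot; the source policy's
    # retention schedule does not apply to it.
    response = ec2.copy_snapshot(
        SourceRegion="ap-southeast-1",
        SourceSnapshotId="snap-0123456789abcdef0",  # placeholder ID
        Description="Manual cross-region copy of an important snapshot",
    )
    print("New snapshot ID:", response["SnapshotId"])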


2.

MULTIPLE CHOICE QUESTION

2 mins • 10 pts

A fintech startup is developing a product on the AWS platform. To speed up development, the company plans to use a SaaS product from AWS Marketplace. The SaaS provider has already configured AWS PrivateLink. In the company's VPC, which configuration is required to utilize this private connection so that traffic flows to the service provider over private AWS networking rather than over the Internet?

In the VPC, configure an interface VPC endpoint for the SaaS, which creates an elastic network interface in the subnet with a private IP address

Configure a site-to-site VPN connection in the customer VPC for the SaaS to use the AWS PrivateLink connection

In the VPC, set up a gateway VPC endpoint for the SaaS which creates an elastic network interface in the subnet with an elastic IP address

In the VPC, create an AWS Direct Connect connection for the SaaS to securely connect with the AWS PrivateLink

Answer explanation


Correct Answer: A

To use AWS PrivateLink, an interface VPC endpoint for the service is required in the VPC (a minimal sketch follows the option notes below).

  • Option A is CORRECT because the interface VPC endpoint is what establishes the private connection to the PrivateLink service.

  • Option B is incorrect because a site-to-site VPN connection is used to connect AWS with an on-premises data center, which is not the case here.

  • Option C is incorrect because an interface VPC endpoint, not a gateway VPC endpoint, is required. Additionally, the IP address on the elastic network interface should be private, not elastic.

  • Option D is incorrect because AWS Direct Connect provides a private connection between an on-premises network and an AWS VPC, whereas this question asks how the VPC communicates with the SaaS.

  • Details can be checked in the AWS document https://docs.aws.amazon.com/vpc/latest/userguide/endpoint-services-overview.html
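
As a minimal sketch (all identifiers below are placeholders, and the service name is whatever the SaaS provider publishes), the interface endpoint could be created with boto3 like this:

    import boto3

    ec2 = boto3.client("ec2", region_name="ap-southeast-1")

    # An interface endpoint creates an elastic network interface with a
    # private IP address in the chosen subnet; traffic to the provider
    # then stays on the AWS network.
    response = ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.vpce.ap-southeast-1.vpce-svc-0123456789abcdef0",
        SubnetIds=["subnet-0123456789abcdef0"],
        SecurityGroupIds=["sg-0123456789abcdef0"],
    )
    print(response["VpcEndpoint"]["VpcEndpointId"])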

3.

MULTIPLE CHOICE QUESTION

2 mins • 10 pts

A company has developed a sensor intended to be placed inside people's watches to monitor the number of steps taken every day. The company expects thousands of sensors to report every minute and hopes to scale to millions by the end of the year. The project needs to accept the data, run it through ETL to store in the warehouse, and archive it on Amazon Glacier, with room for a real-time dashboard for the sensor data to be added at a later date. What is the best method for architecting this application given the requirements?

Write the sensor data to Amazon S3 with a lifecycle policy for Glacier, create an EMR cluster that uses the bucket data and runs it through ETL, and output that data into a Redshift data warehouse

Use Amazon Cognito to accept the data when the user pairs the sensor to the phone, and have Cognito send the data to DynamoDB. Use Data Pipeline to create a job that takes the DynamoDB table and sends it to an EMR cluster for ETL, then outputs to Redshift and S3, using S3 lifecycle policies to archive on Glacier

Write the sensor data directly to a scalable DynamoDB table; create a data pipeline that starts an EMR cluster using the data from DynamoDB and sends the data to S3 and Redshift

Write the sensor data directly to Amazon Kinesis, output the data into Amazon S3, and create a lifecycle policy for Glacier archiving. Also, have a parallel processing application that runs the data through EMR and sends it to a Redshift data warehouse

Answer explanation

Correct Answer: D

  • Option A is incorrect because S3 is not ideal for handling huge amounts of real-time requests.

  • Option B is incorrect because Amazon Cognito is not suitable for handling real-time data.

  • Option C is incorrect because DynamoDB is not suitable for ingesting and handling streaming sensor data at this scale.

  • Option D is CORRECT because the requirement is real-time data ingestion and analytics. The best option is to use Kinesis for storing real-time incoming data. The data can then be moved to S3 and then analyzed using EMR and Redshift. Data can then be moved to Glacier for archival.

More information about the use of Amazon Kinesis:

Amazon Kinesis is a platform for streaming data on AWS, making it easy to load and analyze streaming data, and also providing the ability for you to build custom streaming data applications for specialized needs.

  • Use Amazon Kinesis Streams to collect and process large streams of data records in real-time.

  • Use Amazon Kinesis Firehose to deliver real-time streaming data to destinations such as Amazon S3 and Amazon Redshift.

  • Use Amazon Kinesis Analytics to process and analyze streaming data with standard SQL.
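
To make the ingestion path concrete, here is a minimal, illustrative producer writing one sensor reading to a Kinesis stream; the stream name, region, and record fields are assumptions for the sketch:

    import json

    import boto3

    kinesis = boto3.client("kinesis", region_name="ap-southeast-1")

    # One reading per sensor per minute; the partition key spreads
    # sensors across the stream's shards.
    record = {"sensor_id": "watch-0001", "steps": 83, "ts": "2024-01-01T00:01:00Z"}
    kinesis.put_record(
        StreamName="step-sensor-stream",  # placeholder stream name
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=record["sensor_id"],
    )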

More information about the use of Amazon Cognito:

Amazon Cognito lets you easily add user sign-up and sign-in and manage permissions for your mobile and web apps. You can create your own user directory within Amazon Cognito, or you can authenticate users through social identity providers such as Facebook, Twitter, or Amazon; with SAML identity solutions; or by using your own identity system. In addition, Amazon Cognito enables you to save data locally on users' devices, allowing your applications to work even when the devices are offline. You can then synchronize data across users' devices so that their app experience remains consistent regardless of the device they use.


4.

MULTIPLE CHOICE QUESTION

2 mins • 10 pts

A company owns an Amazon Aurora MySQL global database configured in two AWS regions. The primary region is us-west-2 and the secondary region is us-east-1. One day, an unexpected outage in the us-west-2 region impacts the Aurora global database, and the primary Aurora DB cluster becomes unavailable. As an AWS cloud engineer, you need to quickly perform a cross-region failover to reconstruct the Aurora global database and recover the application from the regional outage. Which of the following approaches is correct?

In the Aurora global database, initiate a failover from the primary region (us-west-2) to the secondary region (us-east-1). Use the same database endpoint for the application

Detach the secondary DB cluster (us-east-1) from the Aurora global database. Reconfigure the application to send write operations to the DB cluster in us-east-1. Add another AWS region to the DB cluster

Detach the secondary DB cluster (us-east-1) from the Aurora global database. Configure the application to send write operations to the DB cluster in us-east-1. Restore the global database by adding the AWS region (us-west-2) back to the DB cluster as the secondary region

Stop all the write operations to the primary cluster. Restore the Aurora global database from the latest DB snapshot. Resume the write operations after the restoring is successful

Answer explanation

Correct Answer: B

  • Option A is incorrect because the application should use the new endpoint of the promoted Aurora DB cluster in us-east-1. Please check https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database-disaster-recovery.html#aurora-global-database-failover for the details.

  • Option B is CORRECT because detaching the secondary DB cluster immediately stops replication and promotes it to a standalone DB cluster with full read/write capability (a small API sketch follows the option notes below). Reconfiguring the endpoint ensures that the application uses the correct Aurora DB cluster.

  • Option C is incorrect because the us-west-2 region should not be added back as the secondary region while it is still experiencing an outage; doing so may cause the global database to fail synchronization or replication.

  • Option D is incorrect because restoring the database from the latest DB snapshot does not perform a cross-region failover. With this option, the global database would still be affected by the outage in the primary us-west-2 region.
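
As a rough API sketch of the detach-and-promote step in option B (the global cluster name and cluster ARN below are placeholders), the secondary cluster is removed from the global database and thereby promoted:

    import boto3

    # Call the RDS API in the surviving secondary region, us-east-1.
    rds = boto3.client("rds", region_name="us-east-1")

    # Detaching stops replication and promotes the secondary to a
    # standalone cluster with full read/write capability.
    rds.remove_from_global_cluster(
        GlobalClusterIdentifier="my-global-database",
        DbClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:my-secondary-cluster",
    )

The application must then be reconfigured to use the promoted cluster's new writer endpoint; the old primary's endpoint is not reused.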


5.

MULTIPLE CHOICE QUESTION

2 mins • 10 pts

You are writing an AWS CloudFormation template and want to assign values to properties that will not be available until runtime. You know that you can use intrinsic functions to do this but are unsure in which parts of the template they can be used. Which of the following correctly describes how you can currently use intrinsic functions in an AWS CloudFormation template?

You can use intrinsic functions in any part of a template

You can use intrinsic functions only in specific parts of a template. You can use intrinsic functions in resource properties, outputs, metadata attributes, and update policy attributes

You can use intrinsic functions only in the resource properties part of a template

You can use intrinsic functions in any part of a template, except AWSTemplateFormatVersion and Description

Answer explanation

Correct Answer: B

As per AWS documentation:

You can use intrinsic functions only in specific parts of a template. Currently, you can use intrinsic functions in resource properties, outputs, metadata attributes, and update policy attributes. You can also use intrinsic functions to create stack resources conditionally.

  • Hence, B is the correct answer.
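
As a minimal sketch (the parameter, resource, and output names are made up, and the validation call simply confirms the template parses), intrinsic functions appear here in a resource property and in an output:

    import json

    import boto3

    # A tiny template with intrinsic functions in a resource property
    # (Fn::Sub) and in an output (Fn::GetAtt); all names are illustrative.
    template = {
        "Parameters": {"Env": {"Type": "String"}},
        "Resources": {
            "DataBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketName": {"Fn::Sub": "my-app-${Env}-${AWS::Region}"}
                },
            }
        },
        "Outputs": {
            "BucketArn": {"Value": {"Fn::GetAtt": ["DataBucket", "Arn"]}}
        },
    }

    cf = boto3.client("cloudformation", region_name="us-east-1")
    cf.validate_template(TemplateBody=json.dumps(template))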

For more information on intrinsic functions, refer to the AWS CloudFormation documentation.