06/05/2024

Assessment • Quiz • Science

1st - 5th Grade • Hard

Created by Ben_ _Papuche

6 questions

1.

MULTIPLE SELECT QUESTION

45 sec • 1 pt

The engineering team at a retail company is planning to migrate to AWS Cloud from the on-premises data center. The team is evaluating Amazon RDS as the database tier for its flagship application. The team has hired you as an AWS Certified Solutions Architect Associate to advise on RDS Multi-AZ capabilities.

Which of the following would you identify as correct for RDS Multi-AZ? (Select two)

For automated backups, I/O activity is suspended on your primary DB since backups are not taken from standby DB

RDS applies OS updates by performing maintenance on the standby, then promoting the standby to primary and finally performing maintenance on the old primary, which becomes the new standby

Amazon RDS automatically initiates a failover to the standby in case the primary database fails for any reason

Updates to your DB Instance are asynchronously replicated across the Availability Zone to the standby in order to keep both in sync

To enhance read scalability, a Multi-AZ standby instance can be used to serve read requests

Answer explanation

Correct options:

RDS applies OS updates by performing maintenance on the standby, then promoting the standby to primary, and finally performing maintenance on the old primary, which becomes the new standby

Running a DB instance as a Multi-AZ deployment can further reduce the impact of a maintenance event because Amazon RDS applies operating system updates by following these steps:

Perform maintenance on the standby.

Promote the standby to primary.

Perform maintenance on the old primary, which becomes the new standby.

When you modify the database engine for your DB instance in a Multi-AZ deployment, Amazon RDS upgrades both the primary and secondary DB instances at the same time. In this case, the database engine for the entire Multi-AZ deployment is shut down during the upgrade.

Amazon RDS automatically initiates a failover to the standby, in case the primary database fails for any reason - You also benefit from enhanced database availability when running your DB instance as a Multi-AZ deployment. If an Availability Zone failure or DB instance failure occurs, your availability impact is limited to the time automatic failover takes to complete.

Another implied benefit of running your DB instance as a Multi-AZ deployment is that DB instance failover is automatic and requires no administration. In an Amazon RDS context, this means you are not required to monitor DB instance events and initiate manual DB instance recovery in the event of an Availability Zone failure or DB instance failure.
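
As a rough illustration of where Multi-AZ fits, here is a minimal boto3 sketch (not part of the quiz material) that enables Multi-AZ on a new DB instance; the identifier, engine, sizes, and credentials are hypothetical placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# MultiAZ=True makes RDS provision a synchronous standby replica in another
# Availability Zone; failover and the staged OS maintenance described above
# are then handled by RDS itself.
rds.create_db_instance(
    DBInstanceIdentifier="flagship-app-db",   # hypothetical name
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",          # use a secrets store in practice
    MultiAZ=True,                             # the Multi-AZ deployment flag
)

# Applications keep using the single DNS endpoint; after an automatic failover,
# RDS repoints it to the new primary (available once the instance is ready).
desc = rds.describe_db_instances(DBInstanceIdentifier="flagship-app-db")
print(desc["DBInstances"][0].get("Endpoint", {}).get("Address"))
```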

Incorrect options:

For automated backups, I/O activity is suspended on your primary DB since backups are not taken from standby DB - The availability benefits of Multi-AZ also extend to planned maintenance. For example, with automated backups, I/O activity is no longer suspended on your primary during your preferred backup window, since backups are taken from the standby.

To enhance read scalability, a Multi-AZ standby instance can be used to serve read requests - A Multi-AZ standby cannot serve read requests. Multi-AZ deployments are designed to provide enhanced database availability and durability, rather than read scaling benefits. As such, the feature uses synchronous replication between the primary and the standby. The AWS implementation makes sure the primary and the standby are constantly in sync, but precludes using the standby for read or write operations.

Updates to your DB Instance are asynchronously replicated across the Availability Zone to the standby in order to keep both in sync - When you create your DB instance to run as a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous “standby” replica in a different Availability Zone. Updates to your DB Instance are synchronously replicated across the Availability Zone to the standby in order to keep both in sync and protect your latest database updates against DB instance failure.

Reference:

https://aws.amazon.com/rds/faqs/

2.

MULTIPLE CHOICE QUESTION

1 min • 10 pts

An e-commerce application uses a relational database that runs several queries that perform joins on multiple tables. The development team has found that these queries are slow and expensive, making them good candidates for caching. The application needs to use a caching service that supports multi-threading.

As a solutions architect, which of the following services would you recommend for the given use case?

Amazon DynamoDB Accelerator (DAX)

Amazon ElastiCache for Memcached

Amazon ElastiCache for Redis

AWS Global Accelerator

Answer explanation

Correct option:

Amazon ElastiCache for Memcached - Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store and cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.

Memcached is an open-source, distributed, in-memory key-value store that can retrieve data in milliseconds. Caching site information with Memcached can help you improve the performance and scalability of your site while controlling cost.

Choose Memcached if the following apply to you:

You need the simplest model possible.

You need to run large nodes with multiple cores or threads (support for multi-threading).

You need the ability to scale out and in, adding and removing nodes as demand on your system increases and decreases.

You need to cache objects.

See the Redis vs Memcached comparison: https://aws.amazon.com/elasticache/redis-vs-memcached/
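
As a concrete (hypothetical) example of the cache-aside pattern this use case calls for, the sketch below caches the result of an expensive join in a Memcached cluster using the pymemcache client; the endpoint, key scheme, and query function are placeholders, not part of the original question.

```python
import json
from pymemcache.client.base import Client

# Hypothetical ElastiCache for Memcached endpoint.
cache = Client(("my-memcached.xxxxxx.cfg.use1.cache.amazonaws.com", 11211))

def get_order_summary(customer_id, run_expensive_join):
    """Cache-aside lookup: serve from Memcached, fall back to the database."""
    key = f"order-summary:{customer_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                 # cache hit: skip the join
    result = run_expensive_join(customer_id)      # cache miss: query the RDBMS
    cache.set(key, json.dumps(result).encode("utf-8"), expire=300)  # keep 5 min
    return result
```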

Incorrect options:

Amazon ElastiCache for Redis - Redis, which stands for Remote Dictionary Server, is a fast, open-source, in-memory key-value data store for use as a database, cache, message broker, and queue. Redis now delivers sub-millisecond response times enabling millions of requests per second for real-time applications in Gaming, Ad-Tech, Financial Services, Healthcare, and IoT. Redis is a popular choice for caching, session management, gaming, leaderboards, real-time analytics, geospatial, ride-hailing, chat/messaging, media streaming, and pub/sub apps.

Redis does not support multi-threading, so this option is not the right fit for the given use case.

Amazon DynamoDB Accelerator (DAX) - Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for Amazon DynamoDB. DAX does not support relational databases.

AWS Global Accelerator - AWS Global Accelerator is a networking service that helps you improve the availability and performance of the applications that you offer to your global users. This option has been added as a distractor; it has nothing to do with database caching.

References:

https://aws.amazon.com/caching/aws-caching/

https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/elasticache-use-cases.html

https://aws.amazon.com/elasticache/redis-vs-memcached/

3.

MULTIPLE CHOICE QUESTION

1 min • 10 pts

A financial services company stores confidential data on an Amazon Simple Storage Service (S3) bucket. The compliance guidelines require that files be stored with server-side encryption. The encryption used must be Advanced Encryption Standard (AES-256) and the company does not want to manage the encryption keys.

Which of the following options represents the most cost-optimal solution for the given use case?

SSE-KMS

SSE-S3

Client-Side Encryption

SSE-C

Answer explanation

Correct option:

SSE-S3

Using Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3), each object is encrypted with a unique key employing strong multi-factor encryption. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data. There are no additional fees for using server-side encryption with Amazon S3-managed keys (SSE-S3).
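
To make the option concrete, here is a short boto3 sketch of SSE-S3, both requested per object and set as the bucket default; the bucket name and object key are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Request AES-256 (SSE-S3) encryption for a single upload.
s3.put_object(
    Bucket="confidential-data-bucket",       # hypothetical bucket
    Key="reports/q1.csv",
    Body=b"example,data",
    ServerSideEncryption="AES256",           # S3-managed keys, no extra charge
)

# Or make SSE-S3 the bucket default so every new object is encrypted at rest.
s3.put_bucket_encryption(
    Bucket="confidential-data-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```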

Incorrect options:

SSE-C - You manage the encryption keys and Amazon S3 manages the encryption as it writes to disks and decryption when you access your objects.

Client-Side Encryption - You can encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.

SSE-KMS - Similar to SSE-S3, but it also provides you with an audit trail of when your key was used and by whom. Additionally, you have the option to create and manage encryption keys yourself. Although SSE-KMS provides an option where AWS manages the encryption key on your behalf, this entails a usage fee for the KMS key, so this option is not the best fit for the given use case.

Reference:

https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html

4.

MULTIPLE CHOICE QUESTION

1 min • 10 pts

A development team wants to ensure that all objects uploaded to an Amazon S3 bucket are encrypted.

Which of the following options represents the correct solution?

Configure the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set

Configure the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set to private

Configure the bucket policy to deny if the PutObject does not have an aws:SecureTransport header set to true

Configure the bucket policy to deny if the PutObject does not have an x-amz-server-side-encryption header set

Answer explanation

Correct option:

Configure the bucket policy to deny if the PutObject does not have an x-amz-server-side-encryption header set - Amazon S3 encrypts your data at the object level as it writes it to disks in AWS data centers and decrypts it for you when you access it. You can encrypt objects by using client-side encryption or server-side encryption. Client-side encryption occurs when an object is encrypted before you upload it to S3, and the keys are not managed by AWS. With server-side encryption, Amazon S3 performs the encryption and the keys are handled in one of three ways:

  1. Server-side encryption with customer-provided encryption keys (SSE-C).

  2. SSE-S3.

  3. SSE-KMS.

Server-side encryption is about data encryption at rest—that is, S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. As long as you authenticate your request and you have access permissions, there is no difference in the way you access encrypted or unencrypted objects.

To encrypt an object at the time of upload, you need to add a header called x-amz-server-side-encryption to the request to tell S3 to encrypt the object using SSE-C, SSE-S3, or SSE-KMS.

In order to enforce object encryption, create an S3 bucket policy that denies any S3 Put request that does not include the x-amz-server-side-encryption header. There are two possible values for the x-amz-server-side-encryption header: AES256, which tells S3 to use S3-managed keys, and aws:kms, which tells S3 to use AWS KMS–managed keys.
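
A sketch of such a policy, applied with boto3 against a hypothetical bucket, is shown below; the single Deny statement rejects any PutObject request that arrives without the x-amz-server-side-encryption header.

```python
import json
import boto3

bucket = "my-encrypted-uploads"              # hypothetical bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedPuts",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            # "Null": true matches requests where the header is absent.
            "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```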

Incorrect options:

Configure the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set to private - The x-amz-acl header is used to specify an ACL in the PutObject request. Access permissions are defined using this header.

Configure the bucket policy to deny if the PutObject does not have an aws:SecureTransport header set to true - By default, Amazon S3 allows both HTTP and HTTPS requests. The aws:SecureTransport condition key is used to check whether the request was sent through HTTP or HTTPS. When this key is true, it means that the request was sent through HTTPS. It has nothing to do with object encryption.

Configure the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set - As discussed above, the s3:x-amz-acl header is used to set permissions on the specified S3 bucket and has nothing to do with encryption.

References:

https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/

https://docs.aws.amazon.com/AmazonS3/latest/userguide/amazon-s3-policy-keys.html

5.

MULTIPLE CHOICE QUESTION

1 min • 10 pts

The CTO of an online home rental marketplace wants to re-engineer the caching layer of the current architecture for its relational database. The CTO wants the caching layer to have replication and archival support built into the architecture.

Which of the following AWS services offers the capabilities required for re-engineering the caching layer?

DocumentDB

ElastiCache for Memcached

ElastiCache for Redis

DynamoDB Accelerator (DAX)

Answer explanation

Correct option:

ElastiCache for Redis

Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications. Amazon ElastiCache for Redis is a great choice for real-time transactional and analytical processing use cases such as caching, chat/messaging, gaming leaderboards, geospatial, machine learning, media streaming, queues, real-time analytics, and session store. ElastiCache for Redis supports replication and archival snapshots right out of the box. Hence this is the correct option.
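
As a rough sketch of those two capabilities, the boto3 call below provisions a Redis replication group with a read replica (replication) and automatic daily snapshots (archival); the identifier, node type, and windows are hypothetical.

```python
import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="rental-cache",                   # hypothetical ID
    ReplicationGroupDescription="Caching layer with replication and backups",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumCacheClusters=2,                 # one primary plus one read replica
    AutomaticFailoverEnabled=True,      # promote the replica if the primary fails
    SnapshotRetentionLimit=7,           # keep 7 days of snapshots (archival)
    SnapshotWindow="03:00-04:00",       # daily UTC backup window
)
```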

Exam Alert:

Please review this comparison sheet for Redis vs Memcached features: https://aws.amazon.com/elasticache/redis-vs-memcached/

Incorrect options:

ElastiCache for Memcached - Amazon ElastiCache for Memcached is a Memcached-compatible in-memory key-value store service that can be used as a cache or a data store. Amazon ElastiCache for Memcached is a great choice for implementing an in-memory cache to decrease access latency, increase throughput, and ease the load off your relational or NoSQL database. Session stores are easy to create with Amazon ElastiCache for Memcached. ElastiCache for Memcached does not support replication and archival snapshots, so this option is ruled out.

DynamoDB Accelerator (DAX) - Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It's a fully managed, multi-region, multi-master, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX cannot be used as a caching layer for a relational database.

DocumentDB - Amazon DocumentDB is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. As a document database, Amazon DocumentDB makes it easy to store, query, and index JSON data. DocumentDB cannot be used as a caching layer for a relational database.

References:

https://aws.amazon.com/elasticache/redis/

https://aws.amazon.com/elasticache/redis-vs-memcached/

6.

MULTIPLE CHOICE QUESTION

1 min • 10 pts

A company wants to publish an event into an SQS queue whenever a new object is uploaded to S3.

Which of the following statements is true regarding this functionality?

Neither Standard SQS queue nor FIFO SQS queue is allowed as an Amazon S3 event notification destination

Only FIFO SQS queue is allowed as an Amazon S3 event notification destination, whereas Standard SQS queue is not allowed

Both Standard SQS queue and FIFO SQS queue are allowed as an Amazon S3 event notification destination

Only Standard SQS queue is allowed as an Amazon S3 event notification destination, whereas FIFO SQS queue is not allowed

Answer explanation

Correct option:

Only Standard SQS queue is allowed as an Amazon S3 event notification destination, whereas FIFO SQS queue is not allowed

The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket. To enable notifications, you must first add a notification configuration that identifies the events you want Amazon S3 to publish and the destinations where you want Amazon S3 to send the notifications.

Amazon S3 supports the following destinations where it can publish events:

Amazon Simple Notification Service (Amazon SNS) topic

Amazon Simple Queue Service (Amazon SQS) queue

AWS Lambda

Currently, only the Standard SQS queue is allowed as an Amazon S3 event notification destination; the FIFO SQS queue is not supported.
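
For illustration, the boto3 sketch below subscribes a standard SQS queue to object-created events on a bucket; the bucket name and queue ARN are hypothetical, and the queue's access policy must separately allow the S3 service to send messages (not shown).

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="upload-events-bucket",       # hypothetical bucket
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                # Must be a standard queue; a FIFO queue ARN would be rejected.
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:new-object-events",
                "Events": ["s3:ObjectCreated:*"],   # fire on every new upload
            }
        ]
    },
)
```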

Incorrect options:

Both Standard SQS queue and FIFO SQS queue are allowed as an Amazon S3 event notification destination

Neither Standard SQS queue nor FIFO SQS queue is allowed as an Amazon S3 event notification destination

Only FIFO SQS queue is allowed as an Amazon S3 event notification destination, whereas Standard SQS queue is not allowed

These three options contradict the details provided in the explanation above. To summarize, only the Standard SQS queue is allowed as an Amazon S3 event notification destination; the FIFO SQS queue is not supported. Hence, these three options are incorrect.

Reference:

https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html