Amazon SAP-C02 Questions & Answers

Full Version: 103 Q&A



SAP-C02 Dumps
SAP-C02 Braindumps
SAP-C02 Real Questions
SAP-C02 Practice Test
SAP-C02 Actual Questions


Amazon
SAP-C02
AWS Certified Solutions Architect - Professional
https://killexams.com/pass4sure/exam-detail/SAP-C02

Question: 89
A company wants to use a third-party software-as-a-service (SaaS) application. The third-party SaaS application is
consumed through several API calls. The third-party SaaS application also runs on AWS inside a VPC.
The company will consume the third-party SaaS application from inside a VPC. The company has internal security
policies that mandate the use of private connectivity that does not traverse the internet. No resources that run in the
company VPC are allowed to be accessed from outside the company’s VPC. All permissions must conform to the
principles of least privilege.
Which solution meets these requirements?
A. Create an AWS PrivateLink interface VPC endpoint. Connect this endpoint to the endpoint service that the third-
party SaaS application provides. Create a security group to limit the access to the endpoint. Associate the security group
with the endpoint.
B. Create an AWS Site-to-Site VPN connection between the third-party SaaS application and the company VPC.
Configure network ACLs to limit access across the VPN tunnels.
C. Create a VPC peering connection between the third-party SaaS application and the company VPC. Update route tables
by adding the needed routes for the peering connection.
D. Create an AWS PrivateLink endpoint service. Ask the third-party SaaS provider to create an interface VPC endpoint
for this endpoint service. Grant permissions for the endpoint service to the specific account of the third-party SaaS
provider.
Answer: A
Explanation:
Reference architecture – https://docs.aws.amazon.com/vpc/latest/privatelink/privatelink-access-saas.html
Note from the documentation that the interface endpoint is created on the client (consumer) side.
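The consumer side of option A can be sketched as follows. This is a minimal sketch, not the provider's actual setup: the VPC, subnet, and security group IDs and the endpoint service name are hypothetical placeholders, and the real call requires boto3 with configured credentials.

```python
def build_endpoint_request(vpc_id, subnet_ids, sg_id, service_name):
    """Parameters for ec2.create_vpc_endpoint: an interface endpoint in the
    consumer VPC, restricted by a dedicated security group (least privilege:
    only the application's security group is allowed to reach port 443)."""
    return {
        "VpcEndpointType": "Interface",
        "VpcId": vpc_id,
        "ServiceName": service_name,   # endpoint service published by the SaaS provider
        "SubnetIds": subnet_ids,
        "SecurityGroupIds": [sg_id],
    }

params = build_endpoint_request(
    "vpc-0abc", ["subnet-0a", "subnet-0b"], "sg-0123",
    "com.amazonaws.vpce.us-east-1.vpce-svc-0example")
# Apply with: boto3.client("ec2").create_vpc_endpoint(**params)
```

Because the traffic flows over PrivateLink, nothing in the company VPC is exposed to the provider or to the internet.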
Question: 90
A company maintains a restaurant review website. The website is a single-page application where files are stored in
Amazon S3 and delivered using Amazon CloudFront. The company receives several fake postings every day that are
manually removed.
The security team has identified that most of the fake posts are from bots with IP addresses that have a bad reputation
within the same global region. The team needs to create a solution to help restrict the bots from accessing the website.
Which strategy should a solutions architect use?
A. Use AWS Firewall Manager to control the CloudFront distribution security settings.
Create a geographical block rule and associate it with Firewall Manager.
B. Associate an AWS WAF web ACL with the CloudFront distribution. Select the managed Amazon IP reputation rule
group for the web ACL with a deny action.
C. Use AWS Firewall Manager to control the CloudFront distribution security settings. Select the managed Amazon IP
reputation rule group and associate it with Firewall Manager with a deny action.
D. Associate an AWS WAF web ACL with the CloudFront distribution. Create a rule group for the web ACL with a
geographical match statement with a deny action.
Answer: B
Explanation:
IP reputation rule groups allow you to block requests based on their source. Choose one or more of these rule groups if
you want to reduce your exposure to bot traffic or exploitation attempts.
The Amazon IP reputation list rule group contains rules that are based on Amazon internal threat intelligence. This is
useful if you would like to block IP addresses typically associated with bots or other threats: it inspects requests
against a list of IP addresses that Amazon threat intelligence has identified as bots.
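Option B can be sketched as the request parameters below. This is an illustrative sketch only: the ACL and metric names are hypothetical, and a real call needs boto3 credentials. Note that CloudFront-scoped web ACLs must be created in us-east-1, and the managed rule group's own rules already block (deny) matching requests.

```python
def build_web_acl(name="review-site-acl"):
    """Parameters for wafv2.create_web_acl: a CloudFront-scoped web ACL that
    applies the Amazon IP reputation managed rule group."""
    rule = {
        "Name": "AmazonIpReputation",
        "Priority": 0,
        "Statement": {"ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesAmazonIpReputationList"}},
        "OverrideAction": {"None": {}},  # keep the rule group's block actions
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "ipReputation"},
    }
    return {"Name": name,
            "Scope": "CLOUDFRONT",  # CloudFront web ACLs live in us-east-1
            "DefaultAction": {"Allow": {}},
            "Rules": [rule],
            "VisibilityConfig": {"SampledRequestsEnabled": True,
                                 "CloudWatchMetricsEnabled": True,
                                 "MetricName": name}}

acl = build_web_acl()
# Apply with: boto3.client("wafv2", region_name="us-east-1").create_web_acl(**acl)
# then associate the ACL with the CloudFront distribution.
```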
Question: 91
A solutions architect is designing an AWS account structure for a company that consists of multiple teams. All the
teams will work in the same AWS Region. The company needs a VPC that is connected to the on-premises network. The
company expects less than 50 Mbps of total traffic to and from the on-premises network.
Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO)
A. Create an AWS CloudFormation template that provisions a VPC and the required subnets. Deploy the template to
each AWS account
B. Create an AWS CloudFormation template that provisions a VPC and the required subnets. Deploy the template to a
shared services account. Share the subnets by using AWS Resource Access Manager
C. Use AWS Transit Gateway along with an AWS Site-to-Site VPN for connectivity to the on-premises network. Share
the transit gateway by using AWS Resource Access Manager
D. Use AWS Site-to-Site VPN for connectivity to the on-premises network
E. Use AWS Direct Connect for connectivity to the on-premises network.
Answer: B,D
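The subnet-sharing half of this solution (AWS Resource Access Manager) can be sketched as the request parameters below. The share name, subnet ARN, and account IDs are hypothetical placeholders; a real call needs boto3 credentials in the shared services account.

```python
def build_subnet_share(share_name, subnet_arns, principals):
    """Parameters for ram.create_resource_share: share the central VPC's
    subnets from a shared services account with the team accounts."""
    return {"name": share_name,
            "resourceArns": subnet_arns,
            "principals": principals,            # account IDs or an OU ARN
            "allowExternalPrincipals": False}    # stay inside the organization

share = build_subnet_share(
    "team-vpc-subnets",
    ["arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0a"],
    ["444455556666"])
# Apply with: boto3.client("ram").create_resource_share(**share)
```

Sharing one VPC's subnets avoids the cost of duplicate VPCs and transit gateways, and a single Site-to-Site VPN comfortably carries less than 50 Mbps.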
Question: 92
A company is running a containerized application in the AWS Cloud. The application runs by using Amazon
Elastic Container Service (Amazon ECS) on a set of Amazon EC2 instances. The EC2 instances run in an Auto Scaling
group.
The company uses Amazon Elastic Container Registry (Amazon ECR) to store its container images. When a new image
version is uploaded, the new image version receives a unique tag.
The company needs a solution that inspects new image versions for common vulnerabilities and exposures. The solution
must automatically delete new image tags that have Critical or High severity findings. The solution also must notify the
development team when such a deletion occurs.
Which solution meets these requirements?
A. Configure scan on push on the repository. Use Amazon EventBridge (Amazon CloudWatch Events) to invoke an
AWS Step Functions state machine when a scan is complete for images that have Critical or High severity findings. Use
the Step Functions state machine to delete the image tag for those images and to notify the development team through
Amazon Simple Notification Service (Amazon SNS).
B. Configure scan on push on the repository. Configure scan results to be pushed to an Amazon Simple Queue Service
(Amazon SQS) queue. Invoke an AWS Lambda function when a new message is added to the SQS queue. Use the
Lambda function to delete the image tag for images that have Critical or High severity findings. Notify the development
team by using Amazon Simple Email Service (Amazon SES).
C. Schedule an AWS Lambda function to start a manual image scan every hour. Configure Amazon EventBridge
(Amazon CloudWatch Events) to invoke another Lambda function when a scan is complete. Use the second Lambda
function to delete the image tag for images that have Critical or High severity findings. Notify the development team by
using Amazon Simple Notification Service (Amazon SNS).
D. Configure periodic image scan on the repository. Configure scan results to be added to an Amazon Simple Queue
Service (Amazon SQS) queue. Invoke an AWS Step Functions state machine when a new message is added to the SQS
queue. Use the Step Functions state machine to delete the image tag for images that have Critical or High severity
findings. Notify the development team by using Amazon Simple Email Service (Amazon SES).
Answer: A
Explanation:
Scan on push inspects each new image version as it arrives and emits a scan-complete event, so EventBridge can drive the delete-and-notify workflow with SNS. Scheduled manual scans add delay and overhead, and ECR limits manual scans of an image to once per 24 hours.
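Whichever service orchestrates the workflow, the core step is deciding from the scan result whether the tag must be removed. A minimal sketch of that decision, assuming the payload shape of ECR's DescribeImageScanFindings response; the repository, tag, and topic names in the comments are hypothetical.

```python
def should_delete(scan_result):
    """Return True when an ECR scan result contains any CRITICAL or HIGH
    severity finding, i.e. the image tag must be deleted."""
    counts = scan_result.get("imageScanFindings", {}).get(
        "findingSeverityCounts", {})
    return counts.get("CRITICAL", 0) > 0 or counts.get("HIGH", 0) > 0

# Inside the workflow task you would then call something like:
#   ecr.batch_delete_image(repositoryName=repo, imageIds=[{"imageTag": tag}])
#   sns.publish(TopicArn=topic_arn, Message=f"Deleted vulnerable tag {tag}")

flagged = should_delete(
    {"imageScanFindings": {"findingSeverityCounts": {"HIGH": 2, "LOW": 9}}})
clean = should_delete(
    {"imageScanFindings": {"findingSeverityCounts": {"LOW": 9}}})
```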
Question: 93
A North American company with headquarters on the East Coast is deploying a new web application running on
Amazon EC2 in the us-east-1 Region. The application should dynamically scale to meet user demand and maintain
resiliency. Additionally, the application must have disaster recovery capabilities in an active-passive configuration with
the us-west-1 Region.
Which steps should a solutions architect take after creating a VPC in the us-east-1 Region?
A. Create a VPC in the us-west-1 Region. Use inter-Region VPC peering to connect both VPCs. Deploy an Application
Load Balancer (ALB) spanning multiple Availability Zones (AZs) to the VPC in the us-east-1 Region. Deploy EC2
instances across multiple AZs in each Region as part of an Auto Scaling group spanning both VPCs and served by the
ALB.
B. Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs) to the VPC in the us-east-1
Region. Deploy EC2 instances across multiple AZs as part of an Auto Scaling group served by the ALB. Deploy the
same solution to the us-west-1 Region. Create an Amazon Route 53 record set with a failover routing policy and health
checks enabled to provide high availability across both Regions.
C. Create a VPC in the us-west-1 Region. Use inter-Region VPC peering to connect both VPCs. Deploy an Application
Load Balancer (ALB) that spans both VPCs. Deploy EC2 instances across multiple Availability Zones as part of an Auto
Scaling group in each VPC served by the ALB. Create an Amazon Route 53 record that points to the ALB.
D. Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs) to the VPC in the us-east-1
Region. Deploy EC2 instances across multiple AZs as part of an Auto Scaling group served by the ALB. Deploy the
same solution to the us-west-1 Region. Create separate Amazon Route 53 records in each Region that point to the ALB
in the Region. Use Route 53 health checks to provide high availability across both Regions.
Answer: B
Explanation:
For a new web application in an active-passive DR configuration, use a Route 53 record set with a failover routing policy and health checks.
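The failover routing piece can be sketched as a Route 53 change batch. This is a sketch under assumptions: the domain, ALB DNS names, and health check ID are hypothetical placeholders, and applying it requires boto3 credentials and a hosted zone.

```python
def build_failover_change(domain, primary_alb, secondary_alb, health_check_id):
    """Route 53 change batch for an active-passive pair: the PRIMARY record
    points at the us-east-1 ALB and is health-checked; Route 53 answers with
    the SECONDARY (us-west-1) record only when the primary is unhealthy."""
    def record(set_id, role, dns, hc=None):
        rrs = {"Name": domain, "Type": "CNAME", "TTL": 60,
               "SetIdentifier": set_id, "Failover": role,
               "ResourceRecords": [{"Value": dns}]}
        if hc:
            rrs["HealthCheckId"] = hc
        return {"Action": "UPSERT", "ResourceRecordSet": rrs}
    return {"Changes": [
        record("primary", "PRIMARY", primary_alb, health_check_id),
        record("secondary", "SECONDARY", secondary_alb)]}

batch = build_failover_change("app.example.com",
                              "alb-east.us-east-1.elb.amazonaws.com",
                              "alb-west.us-west-1.elb.amazonaws.com",
                              "hc-123")
# Apply with: boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z...", ChangeBatch=batch)
```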
Question: 94
A company is launching a new web application on Amazon EC2 instances. Development and production workloads
exist in separate AWS accounts.
According to the company’s security requirements, only automated configuration tools are allowed to access the
production account. The company’s security team wants to receive immediate notification if any manual access to the
production AWS account or EC2 instances occurs
Which combination of actions should a solutions architect take in the production account to meet these requirements?
(Select THREE.)
A. Turn on AWS CloudTrail logs in the application's primary AWS Region. Use Amazon Athena to query the logs for
AwsConsoleSignIn events.
B. Configure Amazon Simple Email Service (Amazon SES) to send email to the security team when an alarm is
activated.
C. Deploy EC2 instances in an Auto Scaling group. Configure the launch template to deploy instances without key pairs.
Configure Amazon CloudWatch Logs to capture system access logs. Create an Amazon CloudWatch alarm that is based
on the logs to detect when a user logs in to an EC2 instance.
D. Configure an Amazon Simple Notification Service (Amazon SNS) topic to send a message to the security team when
an alarm is activated.
E. Turn on AWS CloudTrail logs for all AWS Regions. Configure Amazon CloudWatch alarms to provide an alert when
an AwsConsoleSignIn event is detected.
F. Deploy EC2 instances in an Auto Scaling group. Configure the launch template to delete the key pair after launch.
Configure Amazon CloudWatch Logs for the system access logs. Create an Amazon CloudWatch dashboard to show user
logins over time.
Answer: C,D,E
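The alert-on-sign-in piece can be sketched as a CloudWatch Logs metric filter feeding an alarm with an SNS action. This is a sketch under assumptions: the log group and namespace names are hypothetical, CloudTrail is assumed to deliver events to CloudWatch Logs, and in CloudTrail log records the console sign-in event name is ConsoleLogin.

```python
def build_signin_filter(log_group):
    """Parameters for logs.put_metric_filter: count CloudTrail ConsoleLogin
    events so a CloudWatch alarm (with an SNS topic as its alarm action)
    can fire on any manual console sign-in."""
    return {"logGroupName": log_group,
            "filterName": "console-signin",
            "filterPattern": '{ $.eventName = "ConsoleLogin" }',
            "metricTransformations": [{
                "metricName": "ConsoleSigninCount",
                "metricNamespace": "Security",
                "metricValue": "1"}]}

flt = build_signin_filter("cloudtrail/production")
# Apply with: boto3.client("logs").put_metric_filter(**flt)
# Then create the alarm on Security/ConsoleSigninCount with
# cloudwatch.put_metric_alarm(..., AlarmActions=[sns_topic_arn]).
```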
Question: 95
A solutions architect is designing an application to accept timesheet entries from employees on their mobile devices.
Timesheets will be submitted weekly, with most of the submissions occurring on Friday. The data must be stored in a
format that allows payroll administrators to run monthly reports. The infrastructure must be highly available and scale to
match the rate of incoming data and reporting requests.
Which combination of steps meets these requirements while minimizing operational overhead? (Select TWO.)
A. Deploy the application to Amazon EC2 On-Demand Instances with load balancing across multiple Availability
Zones. Use scheduled Amazon EC2 Auto Scaling to add capacity before the high volume of submissions on Fridays.
B. Deploy the application in a container using Amazon Elastic Container Service (Amazon ECS) with load balancing
across multiple Availability Zones. Use scheduled Service Auto Scaling to add capacity before the high volume of
submissions on Fridays.
C. Deploy the application front end to an Amazon S3 bucket served by Amazon CloudFront. Deploy the application
backend using Amazon API Gateway with an AWS Lambda proxy integration.
D. Store the timesheet submission data in Amazon Redshift. Use Amazon QuickSight to generate the reports using
Amazon Redshift as the data source.
E. Store the timesheet submission data in Amazon S3. Use Amazon Athena and Amazon QuickSight to generate the
reports using Amazon S3 as the data source.
Answer: C,E
Explanation:
The question asks for minimal operational overhead: a serverless front end and API (S3, CloudFront, API Gateway, Lambda) scales automatically with the Friday spike and requires no capacity scheduling, and Athena with QuickSight reports directly on the S3 data without managing a data warehouse.
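The monthly reporting step in option E amounts to running a SQL aggregation over the S3 data with Athena. A minimal sketch, assuming the timesheet data is registered in the Glue Data Catalog with the hypothetical table and column names used below:

```python
def monthly_report_sql(year, month, table="timesheets"):
    """Athena SQL for a payroll month: total hours per employee."""
    return (f"SELECT employee_id, SUM(hours) AS total_hours "
            f"FROM {table} "
            f"WHERE year = {year} AND month = {month} "
            f"GROUP BY employee_id")

sql = monthly_report_sql(2024, 5)
# Run with: boto3.client("athena").start_query_execution(
#     QueryString=sql,
#     ResultConfiguration={"OutputLocation": "s3://report-results/"})
```

Partitioning the S3 data by year and month keeps these queries cheap, since Athena then scans only the requested month.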
Question: 96
A company needs to implement a patching process for its servers. The on-premises servers and Amazon EC2 instances
use a variety of tools to perform patching.
Management requires a single report showing the patch status of all the servers and instances.
Which set of actions should a solutions architect take to meet these requirements?
A. Use AWS Systems Manager to manage patches on the on-premises servers and EC2 instances. Use Systems Manager
to generate patch compliance reports.
B. Use AWS OpsWorks to manage patches on the on-premises servers and EC2 instances. Use Amazon QuickSight
integration with OpsWorks to generate patch compliance reports.
C. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to apply patches by scheduling an AWS Systems
Manager patch remediation job. Use Amazon Inspector to generate patch compliance reports.
D. Use AWS OpsWorks to manage patches on the on-premises servers and EC2 instances. Use AWS X-Ray to post the
patch status to AWS Systems Manager OpsCenter to generate patch compliance reports.
Answer: A
Explanation:
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html
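Once both the EC2 instances and the on-premises servers are registered as Systems Manager managed nodes, a single compliance view can be pulled from the patch state data. A minimal sketch of collapsing that data into one report; the summarization logic is illustrative, not a Systems Manager feature.

```python
def summarize_patch_states(instance_states):
    """Collapse ssm.describe_instance_patch_states results into a simple
    compliant / non-compliant count for a single management report."""
    summary = {"compliant": 0, "non_compliant": 0}
    for state in instance_states:
        if state.get("MissingCount", 0) or state.get("FailedCount", 0):
            summary["non_compliant"] += 1
        else:
            summary["compliant"] += 1
    return summary

# states = boto3.client("ssm").describe_instance_patch_states(
#     InstanceIds=managed_node_ids)["InstancePatchStates"]
report = summarize_patch_states([{"MissingCount": 0, "FailedCount": 0},
                                 {"MissingCount": 3, "FailedCount": 0}])
# -> {'compliant': 1, 'non_compliant': 1}
```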
Question: 97
A company has a photo sharing social networking application. To provide a consistent experience for users, the
company performs some image processing on the photos uploaded by users before publishing on the application. The
image processing is implemented using a set of Python libraries.
The current architecture is as follows:
• The image processing Python code runs in a single Amazon EC2 instance and stores the processed images in an
Amazon S3 bucket named ImageBucket.
• The front-end application, hosted in another bucket, loads the images from ImageBucket to display to users.
With plans for global expansion, the company wants to implement changes in its existing architecture to be able to scale
for increased demand on the application and reduce management complexity as the application scales.
Which combination of changes should a solutions architect make? (Select TWO.)
A. Place the image processing EC2 instance into an Auto Scaling group.
B. Use AWS Lambda to run the image processing tasks.
C. Use Amazon Rekognition for image processing.
D. Use Amazon CloudFront in front of ImageBucket.
E. Deploy the applications in an Amazon ECS cluster and apply Service Auto Scaling.
Answer: B,D
Explanation:
https://prismatic.io/blog/why-we-moved-from-lambda-to-ecs/
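Moving the processing to Lambda means handling S3 upload events instead of running a long-lived EC2 instance. A minimal handler skeleton, assuming the standard S3 event shape; the actual image-processing call is only indicated in a comment because it depends on the company's Python libraries.

```python
def records_to_process(event):
    """Extract (bucket, key) pairs from an S3 put event -- the trigger
    payload a Lambda-based version of the image pipeline receives."""
    return [(r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
            for r in event.get("Records", [])]

def handler(event, context):
    pairs = records_to_process(event)
    # For each pair: fetch the object from S3, run it through the existing
    # Python image libraries, and write the result back to ImageBucket.
    return pairs

sample_event = {"Records": [{"s3": {"bucket": {"name": "uploads"},
                                    "object": {"key": "photo.jpg"}}}]}
```

Lambda scales per upload with no instances to manage, and CloudFront in front of ImageBucket (option D) caches the processed images close to the globally distributed users.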
Question: 98
A company runs an e-commerce platform with front-end and e-commerce tiers. Both tiers run on LAMP stacks with the
front-end instances running behind a load balancing appliance that has a virtual offering on AWS. Currently, the
operations team uses SSH to log in to the instances to maintain patches and address other concerns.
The platform has recently been the target of multiple attacks, including:
• A DDoS attack
• An SQL injection attack
• Several successful dictionary attacks on SSH accounts on the web servers
The company wants to improve the security of the e-commerce platform by migrating to AWS.
The company’s solutions architects have decided to use the following approach:
• Code review the existing application and fix any SQL injection issues.
• Migrate the web application to AWS and leverage the latest AWS Linux AMI to address initial security patching.
• Install AWS Systems Manager to manage patching and allow the system administrators to run commands on all
instances, as needed.
What additional steps will address all of the identified attack types while providing high availability and minimizing
risk?
A. Enable SSH access to the Amazon EC2 instances using a security group that limits access to specific IPs. Migrate
on-premises MySQL to Amazon RDS Multi-AZ. Install the third-party load balancer from the AWS Marketplace and
migrate the existing rules to the load balancer’s AWS instances. Enable AWS Shield Standard for DDoS protection.
B. Disable SSH access to the Amazon EC2 instances. Migrate on-premises MySQL to Amazon RDS Multi-AZ.
Leverage an Elastic Load Balancer to spread the load and enable AWS Shield Advanced for protection. Add an Amazon
CloudFront distribution in front of the website. Enable AWS WAF on the distribution to manage the rules.
C. Enable SSH access to the Amazon EC2 instances through a bastion host secured by limiting access to specific IP
addresses. Migrate on-premises MySQL to a self-managed EC2 instance. Leverage an AWS Elastic Load Balancer to
spread the load, and enable AWS Shield Standard for DDoS protection. Add an Amazon CloudFront distribution in front
of the website.
D. Disable SSH access to the EC2 instances. Migrate on-premises MySQL to Amazon RDS Single-AZ. Leverage an
AWS Elastic Load Balancer to spread the load. Add an Amazon CloudFront distribution in front of the website. Enable
AWS WAF on the distribution to manage the rules.
Answer: B
Question: 99
A company manages an on-premises JavaScript front-end web application. The application is hosted on two servers
secured with a corporate Active Directory. The application calls a set of Java-based microservices on an application
server and stores data in a clustered MySQL database. The application is heavily used during the day on weekdays. It is
lightly used during the evenings and weekends.
Daytime traffic to the application has increased rapidly, and reliability has diminished as a result. The company wants to
migrate the application to AWS with a solution that eliminates the need for server maintenance, with an API to securely
connect to the microservices.
Which combination of actions will meet these requirements? (Select THREE.)
A. Host the web application on Amazon S3. Use Amazon Cognito identity pools (federated identities) with SAML for
authentication and authorization.
B. Host the web application on Amazon EC2 with Auto Scaling. Use Amazon Cognito
federation and Login with Amazon for authentication and authorization.
C. Create an API layer with Amazon API Gateway. Rehost the microservices on AWS Fargate containers.
D. Create an API layer with Amazon API Gateway. Rehost the microservices on Amazon Elastic Container Service
(Amazon ECS) containers.
E. Replatform the database to Amazon RDS for MySQL.
F. Replatform the database to Amazon Aurora MySQL Serverless.
Answer: A,C,E
Question: 100
A financial company is building a system to generate monthly, immutable bank account statements for its users.
Statements are stored in Amazon S3. Users should have immediate access to their monthly statements for up to 2 years.
Some users access their statements frequently, whereas others rarely access their statements. The company’s security and
compliance policy requires that the statements be retained for at least 7 years.
What is the MOST cost-effective solution to meet the company’s needs?
A. Create an S3 bucket with Object Lock disabled. Store statements in S3 Standard. Define an S3 Lifecycle policy to
transition the data to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days. Define another S3 Lifecycle policy
to move the data to S3 Glacier Deep Archive after 2 years. Attach an S3 Glacier Vault Lock policy with deny delete
permissions for archives less than 7 years old.
B. Create an S3 bucket with versioning enabled. Store statements in S3 Intelligent-Tiering. Use same-Region replication
to replicate objects to a backup S3 bucket. Define an S3 Lifecycle policy for the backup S3 bucket to move the data to
S3 Glacier. Attach an S3 Glacier Vault Lock policy with deny delete permissions for archives less than 7 years old.
C. Create an S3 bucket with Object Lock enabled. Store statements in S3 Intelligent-Tiering. Enable compliance mode
with a default retention period of 2 years. Define an S3 Lifecycle policy to move the data to S3 Glacier after 2 years.
Attach an S3 Glacier Vault Lock policy with deny delete permissions for archives less than 7 years old.
D. Create an S3 bucket with versioning disabled. Store statements in S3 One Zone-Infrequent Access (S3 One Zone-IA).
Define an S3 Lifecycle policy to move the data to S3 Glacier Deep Archive after 2 years. Attach an S3 Glacier Vault
Lock policy with deny delete permissions for archives less than 7 years old.
Answer: C
Explanation:
Object Lock must be enabled when the bucket is created, and compliance mode with a 2-year default retention makes the statements immutable while users still need them. S3 Intelligent-Tiering handles the mix of frequent and rare access cost-effectively, and the Lifecycle transition to S3 Glacier plus a Vault Lock policy enforces the 7-year retention requirement.
References:
https://aws.amazon.com/about-aws/whats-new/2018/11/s3-object-lock/
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
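Option C's storage setup can be sketched as the request parameters below. The bucket name and prefix are hypothetical placeholders; applying the calls requires boto3 credentials, and Object Lock can only be turned on at bucket creation.

```python
def build_statement_bucket(bucket="bank-statements"):
    """Parameters for s3.create_bucket with Object Lock enabled."""
    return {"Bucket": bucket, "ObjectLockEnabledForBucket": True}

def build_lifecycle(years_to_glacier=2):
    """Lifecycle configuration for put_bucket_lifecycle_configuration:
    objects are stored in Intelligent-Tiering on upload (per-object
    StorageClass) and transition to S3 Glacier after 2 years."""
    return {"Rules": [{
        "ID": "archive-statements",
        "Status": "Enabled",
        "Filter": {"Prefix": "statements/"},
        "Transitions": [{"Days": years_to_glacier * 365,
                         "StorageClass": "GLACIER"}]}]}

bucket_params = build_statement_bucket()
lifecycle = build_lifecycle()
# s3 = boto3.client("s3")
# s3.create_bucket(**bucket_params)
# s3.put_bucket_lifecycle_configuration(
#     Bucket=bucket_params["Bucket"], LifecycleConfiguration=lifecycle)
```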
Question: 101
A company runs an application on AWS. An AWS Lambda function uses credentials to authenticate to an Amazon RDS
for MySQL DB instance. A security risk assessment identified that these credentials are not frequently rotated. Also,
encryption at rest is not enabled for the DB instance. The security team requires that both of these issues be resolved.
Which strategy should a solutions architect recommend to remediate these security risks?
A. Configure the Lambda function to store and retrieve the database credentials in AWS Secrets Manager and enable
rotation of the credentials. Take a snapshot of the DB instance and encrypt a copy of that snapshot. Replace the DB
instance with a new DB instance that is based on the encrypted snapshot.
B. Enable IAM DB authentication on the DB instance. Grant the Lambda execution role access to the DB instance.
Modify the DB instance and enable encryption.
C. Enable IAM DB authentication on the DB instance. Grant the Lambda execution role access to the DB instance.
Create an encrypted read replica of the DB instance. Promote the encrypted read replica to be the new primary node.
D. Configure the Lambda function to store and retrieve the database credentials as encrypted AWS Systems Manager
Parameter Store parameters. Create another Lambda function to automatically rotate the credentials. Create an encrypted
read replica of the DB instance. Promote the encrypted read replica to be the new primary node.
Answer: A
Explanation:
Parameter Store can store DB credentials as a secure string but cannot rotate secrets on its own, whereas Secrets Manager offers built-in rotation, so option A is correct. In addition, encryption at rest cannot be enabled on an existing MySQL RDS instance: you must copy a snapshot with encryption enabled and restore a new, encrypted instance from it. Creating an encrypted read replica of an unencrypted DB instance is likewise not possible. Hence A is the only workable solution.
References:
https://aws.amazon.com/blogs/security/rotate-amazon-rds-database-credentials-automatically-with-aws-secrets-manager/
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html#Overview.Encryption.Limitations
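The encryption half of option A can be sketched as the snapshot-copy parameters below. The snapshot identifiers and KMS key are hypothetical placeholders; a real run needs boto3 credentials, and the encrypted copy is then restored with restore_db_instance_from_db_snapshot to replace the original instance.

```python
def encrypted_copy_params(source_snapshot, target_snapshot, kms_key_id):
    """Parameters for rds.copy_db_snapshot: copying an unencrypted snapshot
    while specifying a KMS key produces an encrypted copy."""
    return {"SourceDBSnapshotIdentifier": source_snapshot,
            "TargetDBSnapshotIdentifier": target_snapshot,
            "KmsKeyId": kms_key_id,
            "CopyTags": True}

copy_params = encrypted_copy_params(
    "mysql-db-snap", "mysql-db-snap-encrypted",
    "arn:aws:kms:us-east-1:111122223333:key/abcd-1234")
# rds = boto3.client("rds")
# rds.copy_db_snapshot(**copy_params)
# rds.restore_db_instance_from_db_snapshot(
#     DBInstanceIdentifier="mysql-db-encrypted",
#     DBSnapshotIdentifier=copy_params["TargetDBSnapshotIdentifier"])
```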
Question: 102
A company is planning to set up a REST API application on AWS. The application team wants to set up a new identity
store on AWS. The IT team does not want to maintain any infrastructure or servers for this deployment.
What is the MOST operationally efficient solution that meets these requirements?
A. Deploy the application as AWS Lambda functions. Set up Amazon API Gateway REST API endpoints for the
application. Create a Lambda function, and configure a Lambda authorizer.
B. Deploy the application in AWS AppSync, and configure AWS Lambda resolvers. Set up an Amazon Cognito user
pool, and configure AWS AppSync to use the user pool for authorization.
C. Deploy the application as AWS Lambda functions. Set up Amazon API Gateway REST API endpoints for the
application. Set up an Amazon Cognito user pool, and configure an Amazon Cognito authorizer.
D. Deploy the application in Amazon Elastic Kubernetes Service (Amazon EKS) clusters. Set up an Application Load
Balancer for the EKS pods. Set up an Amazon Cognito user pool and service pod for authentication.
Answer: C
Question: 103
A company standardized its method of deploying applications to AWS using AWS CodePipeline and AWS
CloudFormation. The applications are written in TypeScript and Python. The company has recently acquired another business that
deploys applications to AWS using Python scripts.
Developers from the newly acquired company are hesitant to move their applications under CloudFormation because it
would require that they learn a new domain-specific language and eliminate their access to language features, such as
looping.
How can the acquired applications quickly be brought up to deployment standards while addressing the developers’
concerns?
A. Create CloudFormation templates and re-use parts of the Python scripts as instance user data. Use the AWS Cloud
Development Kit (AWS CDK) to deploy the application using these templates. Incorporate the AWS CDK into
CodePipeline and deploy the application to AWS using these templates.
B. Use a third-party resource provisioning engine inside AWS CodeBuild to standardize the deployment processes of the
existing and acquired company. Orchestrate the CodeBuild job using CodePipeline.
C. Standardize on AWS OpsWorks. Integrate OpsWorks with CodePipeline. Have the developers create Chef recipes to
deploy their applications on AWS.
D. Define the AWS resources using Typescript or Python. Use the AWS Cloud Development Kit (AWS CDK) to create
CloudFormation templates from the developers’ code, and use the AWS CDK to create CloudFormation stacks.
Incorporate the AWS CDK as a CodeBuild job in CodePipeline.
Answer: D
Explanation:
https://docs.aws.amazon.com/cdk/latest/guide/codepipeline_example.html
By using the AWS CDK, the developers can define the AWS resources using the familiar TypeScript or Python
programming languages, rather than learning a new domain-specific language like CloudFormation. The AWS CDK
then generates the CloudFormation templates, allowing the company to standardize on CloudFormation for deployment
while still leveraging the developers’ expertise in TypeScript or Python. The AWS CDK can be integrated as a
CodeBuild job in CodePipeline, making it part of the standardized deployment process.
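The looping concern can be made concrete with a small sketch. Plain CloudFormation templates cannot express a loop, but Python can generate one resource per environment; the queue names and environments below are hypothetical. In real CDK code the same loop would add construct objects (for example aws_sqs.Queue) to a Stack, and `cdk synth` would emit the equivalent template.

```python
def queue_resources(envs):
    """Generate one CloudFormation SQS queue resource per environment --
    the kind of loop a static template cannot express but Python can."""
    return {f"{env.capitalize()}Queue": {
                "Type": "AWS::SQS::Queue",
                "Properties": {"QueueName": f"orders-{env}"}}
            for env in envs}

resources = queue_resources(["dev", "staging", "prod"])
# A CDK app would wrap this logic in a Stack class; CodeBuild then runs
# `cdk deploy` (or `cdk synth` + CloudFormation actions) inside CodePipeline.
```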

User: Gabriela*****

I passed my sap-c02 exam with 68.25% thanks to Killexams.com question bank. The questions were appropriate, and they keep updating the database with new questions. I highly recommend Killexams.com to anyone who wants to pass their sap-c02 exam.
User: Kostya*****

For years, I have relied on killexams.com for reliable IT exam resources, and the sap-c02 exam was no exception. I passed this exam with the help of their questions and answers and exam simulator. Everything that people say about killexams.com reliability is true. Their customer service is also exceptional, although I have never had any issues that required contacting them.
User: Tasha*****

Before joining killexams.com, I had attempted the SAP-C02 exam once before. However, I did not succeed, even after devoting enough time to my studies. I did not know what I was missing until I joined killexams.com.
User: Valik*****

Passing the Amazon SAP-C02 exam was not an easy feat, but with the help of Killexams.com, I achieved an extraordinary score of 89%. Their study materials and resources were helpful in preparing me for the exam, and I am proud of my accomplishment.
User: Leni*****

I recently passed the sap-c02 exam using a guide that proved to be very helpful. The majority of the questions on the exam were covered in this guide, and the answers were accurate. I highly recommend this website for anyone preparing to take the sap-c02 exam.

Features of iPass4sure SAP-C02 Exam

  • Files: PDF / Test Engine
  • Premium Access
  • Online Test Engine
  • Instant download Access
  • Comprehensive Q&A
  • Success Rate
  • Real Questions
  • Updated Regularly
  • Portable Files
  • Unlimited Download
  • 100% Secured
  • Confidentiality: 100%
  • Success Guarantee: 100%
  • Any Hidden Cost: $0.00
  • Auto Recharge: No
  • Updates Intimation: by Email
  • Technical Support: Free
  • PDF Compatibility: Windows, Android, iOS, Linux
  • Test Engine Compatibility: Mac / Windows / Android / iOS / Linux

Premium PDF with 103 Q&A

Get Full Version

All Amazon Exams

Amazon Exams

Certification and Entry Test Exams

Complete exam list