
Amazon SAP-C01 Dumps Question Answers ~ Secret of Success | Amazondumps.com

It has become tough to get your desired job in today's competitive world. Demand for skills keeps rising, and IT is no exception, since the field advances day by day and competition for qualified professionals keeps getting tougher. To help you cope with the rapidly growing IT industry, our highly qualified staff has prepared the best SAP-C01 Dumps Question Answers for you. From us, you can get the best material for your desired IT certificate. This exam study guide consists of simple but important questions and answers that will help you pass your test on the very first attempt. You can get all the information you need about your IT certification from us. The study material is also available in the AWS Certified Solutions Architect - Professional Online Test Engine, and a free demo version is available as well. You can also take free practice tests. So, with such an unbelievable opportunity, what are you waiting for? Visit Amazondumps.com and get your desired AWS Certified Professional exam.



Amazon SAP-C01 Dumps AWS Certified Solutions Architect - Professional

Question #:1
A company has an application that generates a weather forecast that is updated every 15 minutes, with an output resolution of 1 billion unique positions, each approximately 20 bytes in size (20 gigabytes per forecast). Every hour, the forecast data is globally accessed approximately 5 million times (1,400 requests per second), and up to 10 times more often during weather events. The forecast data is overwritten with every update. Users of the current weather forecast application expect responses to queries to be returned in less than two seconds for each request.
Which design meets the required request rate and response time?
A. Store forecast locations in an Amazon ES cluster. Use an Amazon CloudFront distribution targeting an Amazon API Gateway endpoint with AWS Lambda functions responding to queries as the origin. Enable API caching on the API Gateway stage with a cache-control timeout set for 15 minutes.
B. Store forecast locations in an Amazon EFS volume. Create an Amazon CloudFront distribution that targets an Elastic Load Balancing group of an Auto Scaling fleet of Amazon EC2 instances that have mounted the Amazon EFS volume. Set the cache-control timeout for 15 minutes in the CloudFront distribution.
C. Store forecast locations in an Amazon ES cluster. Use an Amazon CloudFront distribution targeting an API Gateway endpoint with AWS Lambda functions responding to queries as the origin. Create a Lambda@Edge function that caches the data locally at edge locations for 15 minutes.
D. Store forecast locations in Amazon S3 as individual objects. Create an Amazon CloudFront distribution targeting an Elastic Load Balancing group of an Auto Scaling fleet of EC2 instances, querying the origin of the S3 object. Set the cache-control timeout for 15 minutes in the CloudFront distribution.
Answer: A
Explanation: https://aws.amazon.com/blogs/networking-and-content-delivery/lambdaedge-design-best-practices/

Question #:2
A financial company is using a high-performance compute cluster running on Amazon EC2 instances to perform market simulations. A DNS record must be created in an Amazon Route 53 private hosted zone when instances start, and the DNS record must be removed after instances are terminated. Currently the company uses a combination of Amazon CloudWatch Events and AWS Lambda to create the DNS record. The solution worked well in testing with small clusters, but in production with clusters containing thousands of instances the company sees the following error in the Lambda logs: HTTP 400 error (Bad request). The response header also includes a status code element with a value of "Throttling" and a status message element with a value of "Rate exceeded".

Which combination of steps should the Solutions Architect take to resolve these issues? (Select THREE)
A. Configure an Amazon SQS FIFO queue and configure a CloudWatch Events rule to use this queue as a target. Remove the Lambda target from the CloudWatch Events rule.
B. Configure an Amazon Kinesis data stream and configure a CloudWatch Events rule to use this stream as a target. Remove the Lambda target from the CloudWatch Events rule.
C. Update the CloudWatch Events rule to trigger on Amazon EC2 "Instance Launch Successful" and "Instance Terminate Successful" events for the Auto Scaling group used by the cluster.
D. Configure a Lambda function to retrieve messages from an Amazon SQS queue. Modify the Lambda function to retrieve a maximum of 10 messages, then batch the messages by Amazon Route 53 API call type and submit. Delete the messages from the SQS queue after successful API calls.
E. Configure an Amazon SQS standard queue and configure the existing CloudWatch Events rule to use this queue as a target. Remove the Lambda target from the CloudWatch Events rule.
F. Configure a Lambda function to read data from the Amazon Kinesis data stream and configure the batch window to 5 minutes. Modify the function to make a single API call to Amazon Route 53 with all records read from the Kinesis data stream.
Answer: B E F

Question #:3
A company hosts a blog post application on AWS using Amazon API Gateway, Amazon DynamoDB, and AWS Lambda. The application currently does not use API keys to authorize requests. The API model is as follows:
GET /posts/[postid] to get post details
GET /users/[userid] to get user details
GET /comments/[commentid] to get comments details
The company has noticed that users are actively discussing topics in the comments section, and the company wants to increase user engagement by making the comments appear in real time.
Which design should be used to reduce comment latency and improve user experience?
A. Use an edge-optimized API with Amazon CloudFront to cache API responses.
B. Modify the blog application code to request GET /comments/[commentid] every 10 seconds.
C. Use AWS AppSync and leverage WebSockets to deliver comments.
D. Change the concurrency limit of the Lambda functions to lower the API response time.

Answer: D

Question #:4
A solutions architect has an operational workload deployed on Amazon EC2 instances in an Auto Scaling group. The VPC architecture spans two Availability Zones (AZ) with a subnet in each that the Auto Scaling group is targeting. The VPC is connected to an on-premises environment and connectivity cannot be interrupted. The maximum size of the Auto Scaling group is 20 instances in service. The VPC IPv4 addressing is as follows:
VPC CIDR: 10.0.0.0/23
AZ1 subnet CIDR: 10.0.0.0/24
AZ2 subnet CIDR: 10.0.1.0/24
Since deployment, a third AZ has become available in the Region. The solutions architect wants to adopt the new AZ without adding additional IPv4 address space and without service downtime.
Which solution will meet these requirements?
A. Update the Auto Scaling group to use the AZ2 subnet only. Delete and re-create the AZ1 subnet using half the previous address space. Adjust the Auto Scaling group to also use the new AZ1 subnet. When the instances are healthy, adjust the Auto Scaling group to use the AZ1 subnet only. Remove the current AZ2 subnet. Create a new AZ2 subnet using the second half of the address space from the original AZ1 subnet. Create a new AZ3 subnet using half the original AZ2 subnet address space, then update the Auto Scaling group to target all three new subnets.
B. Terminate the EC2 instances in the AZ1 subnet. Delete and re-create the AZ1 subnet using half the address space. Update the Auto Scaling group to use this new subnet. Repeat this for the second AZ. Define a new subnet in AZ3, then update the Auto Scaling group to target all three new subnets.
C. Create a new VPC with the same IPv4 address space and define three subnets, with one for each AZ. Update the existing Auto Scaling group to target the new subnets in the new VPC.
D. Update the Auto Scaling group to use the AZ2 subnet only. Update the AZ1 subnet to have half the previous address space. Adjust the Auto Scaling group to also use the AZ1 subnet again. When the instances are healthy, adjust the Auto Scaling group to use the AZ1 subnet only. Update the current AZ2 subnet and assign the second half of the address space from the original AZ1 subnet. Create a new AZ3 subnet using half the original AZ2 subnet address space, then update the Auto Scaling group to target all three new subnets.
Answer: A

Question #:5
A company is migrating an application to AWS. It wants to use fully managed services as much as possible during the migration. The company needs to store large, important documents within the application with the following requirements:
The data must be highly durable and available.
The data must always be encrypted at rest and in transit.
The encryption key must be managed by the company and rotated periodically.
Which of the following solutions should the Solutions Architect recommend?
A. Deploy the storage gateway to AWS in file gateway mode. Use Amazon EBS volume encryption using an AWS KMS key to encrypt the storage gateway volumes.
B. Use Amazon S3 with a bucket policy to enforce HTTPS for connections to the bucket and to enforce server-side encryption and AWS KMS for object encryption.
C. Use Amazon DynamoDB with SSL to connect to DynamoDB. Use an AWS KMS key to encrypt DynamoDB objects at rest.
D. Deploy instances with Amazon EBS volumes attached to store this data. Use EBS volume encryption using an AWS KMS key to encrypt the data.
Answer: B
Explanation: https://aws.amazon.com/blogs/security/how-to-use-bucket-policies-and-apply-defense-in-depth-to-help-secure-your-amazon-s3-data/

Question #:6
A company is running multiple applications on Amazon EC2. Each application is deployed and managed by multiple business units. All applications are deployed on a single AWS account but on different virtual private clouds (VPCs). The company uses a separate VPC in the same account for test and development purposes. Production applications suffered multiple outages when users accidentally terminated and modified resources that belonged to another business unit. A Solutions Architect has been asked to improve the availability of the company applications while allowing the Developers access to the resources they need.
Which option meets the requirements with the LEAST disruption?
A. Create an AWS account for each business unit. Move each business unit's instances to its own account and set up a federation to allow users to access their business unit's account.
B. Set up a federation to allow users to use their corporate credentials, and lock the users down to their own VPC. Use a network ACL to block each VPC from accessing other VPCs.
C. Implement a tagging policy based on business units. Create an IAM policy so that each user can terminate instances belonging to their own business units only.

D. Set up role-based access for each user and provide limited permissions based on individual roles and the services for which each user is responsible.
Answer: C
Explanation: https://aws.amazon.com/blogs/security/resource-level-permissions-for-ec2-controlling-management-access-on-specific-instances/

Question #:7
A solutions architect needs to define a reference architecture for a solution for three-tier applications with web, application, and NoSQL data layers. The reference architecture must meet the following requirements:
• High availability within an AWS Region
• Able to fail over in 1 minute to another AWS Region for disaster recovery
• Provide the most efficient solution while minimizing the impact on the user experience
Which combination of steps will meet these requirements? (Select THREE.)
A. Use an Amazon Route 53 weighted routing policy set to 100/0 across the two selected Regions. Set Time to Live (TTL) to 1 hour.
B. Use an Amazon Route 53 failover routing policy for failover from the primary Region to the disaster recovery Region. Set Time to Live (TTL) to 30 seconds.
C. Use a global table within Amazon DynamoDB so data can be accessed in the two selected Regions.
D. Back up data from an Amazon DynamoDB table in the primary Region every 60 minutes and then write the data to Amazon S3. Use S3 cross-Region replication to copy the data from the primary Region to the disaster recovery Region. Have a script import the data into DynamoDB in a disaster recovery scenario.
E. Implement a hot standby model using Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use zonal Reserved Instances for the minimum number of servers and On-Demand Instances for any additional resources.
F. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use Spot Instances for the required resources.
Answer: A D E

Question #:8
An enterprise company is building an infrastructure services platform for its users. The company has the following requirements:
• Provide least privilege access to users when launching AWS infrastructure so users cannot provision unapproved services
• Use a central account to manage the creation of infrastructure services
• Provide the ability to distribute infrastructure services to multiple accounts in AWS Organizations
• Provide the ability to enforce tags on any infrastructure that is started by users
Which combination of actions using AWS services will meet these requirements? (Select THREE.)
A. Develop infrastructure services using AWS CloudFormation templates. Add the templates to a central Amazon S3 bucket and add the IAM roles or users that require access to the S3 bucket policy.
B. Develop infrastructure services using AWS CloudFormation templates. Upload each template as an AWS Service Catalog product to portfolios created in a central AWS account. Share these portfolios with the Organizations structure created for the company.
C. Allow user IAM roles to have AWSCloudFormationFullAccess and AmazonS3ReadOnlyAccess permissions. Add an Organizations SCP at the AWS account root user level to deny all services except AWS CloudFormation and Amazon S3.
D. Allow user IAM roles to have ServiceCatalogEndUserAccess permissions only. Use an automation script to import the central portfolios to local AWS accounts, copy the TagOptions, assign users access, and apply launch constraints.
E. Use the AWS Service Catalog TagOption Library to maintain a list of tags required by the company. Apply the TagOptions to AWS Service Catalog products or portfolios.
F. Use the AWS CloudFormation Resource Tags property to enforce the application of tags to any CloudFormation templates that will be created for users.
Answer: B D F

Question #:9
A company is using AWS Organizations to manage multiple AWS accounts.
For security purposes, the company requires the creation of an Amazon Simple Notification Service (Amazon SNS) topic that enables integration with a third-party alerting system in all the Organizations member accounts. A solutions architect used an AWS CloudFormation template to create the SNS topic and stack sets to automate the deployment of CloudFormation stacks. Trusted access has been enabled in Organizations.
What should the solutions architect do to deploy the CloudFormation StackSets in all AWS accounts?
A. Create a stack set in the Organizations member accounts. Use service-managed permissions. Set deployment options to deploy to an organization. Use CloudFormation StackSets drift detection.

B. Create stacks in the Organizations member accounts. Use self-service permissions. Set deployment options to deploy to an organization. Enable the CloudFormation StackSets automatic deployment.
C. Create stacks in the Organizations master account. Use service-managed permissions. Set deployment options to deploy to the organization. Enable CloudFormation StackSets automatic deployment.
D. Create stacks in the Organizations master account. Use service-managed permissions. Set deployment options to deploy to the organization. Enable CloudFormation StackSets drift detection.
Answer: C

Question #:10
A company uses AWS Organizations to manage more than 1,000 AWS accounts. The company has created a new developer organization. There are 540 developer member accounts that must be moved to the new developer organization. All accounts are set up with all the required information so that each account can be operated as a standalone account.
Which combination of steps should a solutions architect take to move all of the developer accounts to the new developer organization? (Select THREE)
A. Call the MoveAccount operation in the Organizations API from the old organization's master account to migrate the developer accounts to the new developer organization.
B. From the master account, remove each developer account from the old organization using the RemoveAccountFromOrganization operation in the Organizations API.
C. From each developer account, remove the account from the old organization using the RemoveAccountFromOrganization operation in the Organizations API.
D. Sign in to the new developer organization's master account and create a placeholder member account that acts as a target for the developer account migration.
E. Call the InviteAccountToOrganization operation in the Organizations API from the new developer organization's master account to send invitations to the developer accounts.
F. Have each developer sign in to their account and confirm to join the new developer organization.
Answer: B D E
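The remove-then-invite flow referenced in the options for Question #:10 can be sketched as a small planning helper. This is a minimal sketch under assumptions: `plan_migration` and the sample account IDs are hypothetical names invented for illustration, and each tuple only records which real AWS Organizations API operation would be called and by whom; an actual migration would issue these calls (for example with boto3) with error handling and retry logic for throttling.

```python
# Sketch: plan the Organizations API calls needed to move developer
# accounts from an old organization to a new one. The function name and
# account IDs are hypothetical; the operation names are real AWS
# Organizations API operations.

def plan_migration(developer_account_ids):
    """Return an ordered list of (caller, operation, params) steps."""
    steps = []
    # The old organization's master account removes each developer
    # account from the old organization.
    for account_id in developer_account_ids:
        steps.append(("old-master", "RemoveAccountFromOrganization",
                      {"AccountId": account_id}))
    # The new developer organization's master account invites each
    # now-standalone account to join.
    for account_id in developer_account_ids:
        steps.append(("new-master", "InviteAccountToOrganization",
                      {"Target": {"Id": account_id, "Type": "ACCOUNT"}}))
    # Each developer account accepts its invitation; the HandshakeId
    # parameter would come from the invitation response, so it is left
    # out of this plan.
    for account_id in developer_account_ids:
        steps.append((account_id, "AcceptHandshake", {}))
    return steps

plan = plan_migration(["111111111111", "222222222222"])
for caller, operation, params in plan:
    print(caller, operation, params)
```

For 540 accounts this plan would contain 1,620 API calls, which is exactly the volume that makes batching and throttling-aware retries worth planning for before running the migration.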
