Refer to the exhibit.

A. Option A
B. Option B
C. Option C
D. Option D
Explanation:
Correct Answer: Allow System APIs to return data that is NOT currently required by the
identified Process or Experience APIs.

A retail company is using an Order API to accept new orders. The Order API uses a JMS
queue to submit orders to a backend order management service. The normal load for
orders is being handled using two (2) CloudHub workers, each configured with 0.2 vCore.
The CPU load of each CloudHub worker normally runs well below 70%. However, several
times during the year the Order API gets four times (4x) the average number of orders.
This causes the CloudHub worker CPU load to exceed 90% and the order submission time
to exceed 30 seconds. The cause, however, is NOT the backend order management
service, which still responds fast enough to meet the response SLA for the Order API.
What is the MOST resource-efficient way to configure the Mule application's CloudHub
deployment to help the company cope with this performance challenge?
A.
Permanently increase the size of each of the two (2) CloudHub workers by at least four
times (4x) to one (1) vCore
B.
Use a vertical CloudHub autoscaling policy that triggers on CPU utilization greater than
70%
C.
Permanently increase the number of CloudHub workers by four times (4x) to eight (8)
CloudHub workers
D.
Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater
than 70%
Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater
than 70%
Explanation:
Correct Answer: Use a horizontal CloudHub autoscaling policy that triggers on CPU
utilization greater than 70%
*****************************************
The scenario clearly states that the usual traffic during the year is handled well by the
existing worker configuration, with CPU running well below 70%. The problem occurs only
occasionally, when there is a spike in the number of incoming orders.
Based on this, we need neither a permanent increase in the size of each worker nor a
permanent increase in the number of workers. Either would be wasteful, because outside
those occasional spikes the extra resources would sit idle.
Two options remain: a horizontal CloudHub autoscaling policy that automatically increases
the number of workers, or a vertical CloudHub autoscaling policy that automatically
increases the vCore size of each worker.
Here, two things need to be taken into consideration:
1. CPU utilization
2. Order submission rate to the JMS queue
>> From a CPU perspective, both options (horizontal and vertical scaling) solve the issue;
both bring the CPU usage back down below 90%.
>> However, with vertical scaling, the application is still load balanced across only two
workers, so there may not be much improvement in the incoming request processing rate or
the order submission rate to the JMS queue. The throughput stays roughly the same; only
the CPU utilization comes down.
>> With horizontal scaling, new workers are spawned and load balanced alongside the
existing ones, which increases throughput. This addresses both the CPU utilization and the
order submission rate.
Hence, a horizontal CloudHub autoscaling policy is the right and best answer.
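As a rough illustration only (real CloudHub autoscaling policies are configured in Runtime
Manager, not in application code), the Python sketch below shows the kind of scale-out and
scale-in decision a horizontal autoscaling policy makes; the thresholds, worker limits, and
function name are hypothetical.

# Simplified illustration of horizontal autoscaling decision logic.
# Real CloudHub policies are configured in Runtime Manager; the names,
# thresholds, and limits below are hypothetical.

def decide_worker_count(current_workers: int,
                        avg_cpu_percent: float,
                        min_workers: int = 2,
                        max_workers: int = 8,
                        scale_up_threshold: float = 70.0,
                        scale_down_threshold: float = 30.0) -> int:
    """Return the desired number of workers based on average CPU load."""
    if avg_cpu_percent > scale_up_threshold and current_workers < max_workers:
        return current_workers + 1   # scale out: add a worker during a spike
    if avg_cpu_percent < scale_down_threshold and current_workers > min_workers:
        return current_workers - 1   # scale in: release idle capacity
    return current_workers           # steady state: keep the current size

# Normal load: CPU well below 70%, so the two 0.2 vCore workers are kept.
print(decide_worker_count(current_workers=2, avg_cpu_percent=55.0))  # -> 2

# Seasonal spike: CPU above 70%, so an extra worker is added.
print(decide_worker_count(current_workers=2, avg_cpu_percent=92.0))  # -> 3

During the seasonal 4x spikes such a policy adds workers, and once CPU drops back below
the lower threshold the deployment returns to its normal two workers, keeping the
configuration resource-efficient for the rest of the year.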
The implementation of a Process API must change. What is a valid approach that minimizes the impact of this change on API clients?
A.
Update the RAML definition of the current Process API and notify API client developers
by sending them links to the updated RAML definition
B.
Postpone changes until API consumers acknowledge they are ready to migrate to a new
Process API or API version
C.
Implement required changes to the Process API implementation so that whenever
possible, the Process API's RAML definition remains unchanged
D.
Implement the Process API changes in a new API implementation, and have the old API
implementation return an HTTP status code 301 - Moved Permanently to inform API clients
they should be calling the new API implementation
Implement required changes to the Process API implementation so that whenever
possible, the Process API's RAML definition remains unchanged
Explanation:
Correct Answer: Implement required changes to the Process API implementation so that,
whenever possible, the Process API’s RAML definition remains unchanged.
*****************************************
The key requirement in the question is:
>> An approach that minimizes the impact of this change on API clients
Based on this:
>> Updating the RAML definition could impact API clients if the changes require anything
mandatory on the client side, so it should be avoided unless truly necessary.
>> Implementing the changes as a completely different API and then redirecting clients
with a 3xx status code is poor design and heavily impacts the API clients.
>> Organizations cannot simply postpone the required changes until all API consumers
acknowledge they are ready to migrate to a new Process API or API version; this is
unrealistic.
The best way to handle such changes is to implement them in the API implementation so
that, whenever possible, the API's RAML definition remains unchanged.
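As a rough, hypothetical illustration of this approach, the Python sketch below changes the
internals of an API operation while keeping the response shape (the published contract)
identical, so existing clients are unaffected; the operation, field names, and backend
shapes are all made up.

# Hypothetical Process API operation: the internal data source changes,
# but the response keeps the exact shape the published RAML describes,
# so existing clients see no breaking change.

def get_order_status_old(order_id: str) -> dict:
    # Old implementation: status comes from a legacy lookup (simulated here).
    return {"orderId": order_id, "status": "SHIPPED"}

def get_order_status_new(order_id: str) -> dict:
    # New implementation: status comes from a different backend (simulated),
    # then is mapped back onto the same contract fields as before.
    backend_state = {"state_code": 3}  # hypothetical new backend response
    code_to_status = {1: "RECEIVED", 2: "PACKED", 3: "SHIPPED"}
    return {"orderId": order_id,
            "status": code_to_status[backend_state["state_code"]]}

# Both versions expose the same contract fields, so the RAML stays unchanged.
assert get_order_status_old("A-1").keys() == get_order_status_new("A-1").keys()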
An organization makes a strategic decision to move towards an IT operating model that emphasizes consumption of reusable IT assets using modern APIs (as defined by MuleSoft). What best describes each modern API in relation to this new IT operating model?
A.
Each modern API has its own software development lifecycle, which reduces the need for documentation and automation
B.
Each modern API must be treated like a product and designed for a particular target audience (for instance, mobile app developers)
C.
Each modern API must be easy to consume, so it should avoid complex authentication mechanisms such as SAML or JWT
D.
Each modern API must be REST and HTTP based
Each modern API must be treated like a product and designed for a particular target audience (for instance, mobile app developers)
Explanation:
Correct Answer: Each modern API must be treated like a product and designed for a
particular target audience (for instance, mobile app developers)
*****************************************
A new upstream API is being designed to offer an SLA of 500 ms median and 800 ms
maximum (99th percentile) response time. The corresponding API implementation needs to
sequentially invoke 3 downstream APIs of very similar complexity.
The first of these downstream APIs offers the following SLA for its response time: median:
100 ms, 80th percentile: 500 ms, 95th percentile: 1000 ms.
If possible, how can a timeout be set in the upstream API for the invocation of the first
downstream API to meet the new upstream API's desired SLA?
A.
Set a timeout of 50 ms; this times out more invocations of that API but gives additional
room for retries
B.
Set a timeout of 100 ms; that leaves 400 ms for the other two downstream APIs to complete
C.
No timeout is possible to meet the upstream API's desired SLA; a different SLA must be
negotiated with the first downstream API or invoke an alternative API
D.
Do not set a timeout; the invocation of this API is mandatory and so we must wait until it
responds
Set a timeout of 100 ms; that leaves 400 ms for the other two downstream APIs to complete
Explanation:
Correct Answer: Set a timeout of 100 ms; that leaves 400 ms for the other two downstream
APIs to complete
*****************************************
Key details to take from the given scenario:
>> The upstream API's designed SLA is 500 ms (median); the maximum SLA response times
can be ignored here.
>> This API calls 3 downstream APIs sequentially, all of similar complexity.
>> The first downstream API offers a median SLA of 100 ms, an 80th percentile of 500 ms,
and a 95th percentile of 1000 ms.
Based on these details:
>> The option suggesting a 50 ms timeout can be ruled out. If the offered median SLA is
already 100 ms, most calls would time out, and time would be wasted on retries until they
are exhausted. Even if some retries succeeded, the remaining time would not leave enough
room for the 2nd and 3rd downstream APIs to respond in time.
>> The option suggesting NOT to set a timeout (because the invocation is mandatory and
we must wait for a response) is a poor choice. Not setting a timeout goes against good
implementation practice, and if the first API does not respond within its 100 ms median
SLA, it will most likely respond in around 500 ms (80th percentile) or 1000 ms (95th
percentile). In both cases, a late successful response from the 1st downstream API does no
good, because by then the upstream API's 500 ms SLA is already breached and there is no
time left to call the 2nd and 3rd downstream APIs.
>> It is NOT true that no timeout can meet the upstream API's desired SLA. Since the 1st
downstream API offers a median SLA of 100 ms, most responses arrive within that time.
Setting a timeout of 100 ms therefore works for most calls and leaves 400 ms for the
remaining 2 downstream API calls.
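As a rough sketch of this time-budget reasoning (using the Python requests library; the
URLs and the even split of the remaining 400 ms are assumptions, not part of the question),
the first call is capped at 100 ms so a slow response cannot consume the whole upstream
SLA budget.

import requests

# Hypothetical downstream endpoints; the 500 ms upstream budget is split so
# the first call is capped at its 100 ms median SLA.
UPSTREAM_BUDGET_S = 0.500
FIRST_CALL_TIMEOUT_S = 0.100                                   # median SLA of API 1
REMAINING_BUDGET_S = UPSTREAM_BUDGET_S - FIRST_CALL_TIMEOUT_S  # 400 ms left

DOWNSTREAM_URLS = [
    "https://example.internal/api1",  # hypothetical
    "https://example.internal/api2",
    "https://example.internal/api3",
]

def invoke_sequentially() -> list:
    responses = []
    # First downstream call: time out at 100 ms.
    responses.append(requests.get(DOWNSTREAM_URLS[0], timeout=FIRST_CALL_TIMEOUT_S))
    # Split what is left evenly across the remaining two calls (~200 ms each,
    # an assumption since both are of similar complexity).
    per_call_timeout = REMAINING_BUDGET_S / 2
    for url in DOWNSTREAM_URLS[1:]:
        responses.append(requests.get(url, timeout=per_call_timeout))
    return responses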
A client has several applications running on Salesforce Service Cloud. The business requirement for integration is to get daily data changes from the Account and Case Objects. Data needs to be moved to the client's private cloud AWS DynamoDB instance as a single JSON, and the business foresees wanting only five attributes from the Account Object, which has 219 attributes (some custom), and eight attributes from the Case Object. What design should be used to support the API/application data model?
A. Create separate entities for Account and Case Objects by mimicking all the attributes in SAPI, which are combined by the PAPI and filtered to provide JSON output containing 13 attributes.
B. Request the client's AWS project team to replicate all the attributes and create Account and Case JSON tables in DynamoDB. Then create separate entities for Account and Case Objects by mimicking all the attributes in SAPI to transfer JSON data to DynamoDB for the respective Objects.
C. Start implementing an Enterprise Data Model by defining enterprise Account and Case Objects and implement SAPI and DynamoDB tables based on the Enterprise Data Model.
D. Create separate entities for Account with five attributes and Case with eight attributes in SAPI, which are combined by the PAPI to provide JSON output containing 13 attributes.
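As a rough illustration of the attribute filtering and combination referred to in these
options, the Python sketch below keeps five hypothetical Account attributes and eight
hypothetical Case attributes and merges them into a single JSON document (13 attributes in
total); all field names and values are made up.

import json

# Hypothetical records as a System API might return them; the real Salesforce
# objects carry many more attributes (219 on Account).
account_record = {"Id": "001", "Name": "Acme", "Industry": "Retail",
                  "Phone": "555-0100", "BillingCity": "Austin",
                  "Unused_Custom_Field__c": "ignored"}
case_record = {"Id": "500", "AccountId": "001", "Status": "Open",
               "Priority": "High", "Subject": "Damaged item", "Origin": "Web",
               "Reason": "Shipping", "CreatedDate": "2024-01-01", "Extra": "ignored"}

# Only the attributes the business asked for are kept (5 + 8 = 13).
ACCOUNT_FIELDS = ["Id", "Name", "Industry", "Phone", "BillingCity"]
CASE_FIELDS = ["Id", "AccountId", "Status", "Priority", "Subject",
               "Origin", "Reason", "CreatedDate"]

combined = {
    "account": {k: account_record[k] for k in ACCOUNT_FIELDS},
    "case": {k: case_record[k] for k in CASE_FIELDS},
}
print(json.dumps(combined, indent=2))  # single JSON document destined for DynamoDB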
What are 4 important Platform Capabilities offered by Anypoint Platform?
A.
API Versioning, API Runtime Execution and Hosting, API Invocation, API Consumer Engagement
B.
API Design and Development, API Runtime Execution and Hosting, API Versioning, API
Deprecation
C.
API Design and Development, API Runtime Execution and Hosting, API Operations and
Management, API Consumer Engagement
D.
API Design and Development, API Deprecation, API Versioning, API Consumer
Engagement
API Design and Development, API Runtime Execution and Hosting, API Operations and
Management, API Consumer Engagement
Explanation:
Correct Answer: API Design and Development, API Runtime Execution and Hosting, API
Operations and Management, API Consumer Engagement
*****************************************
>> API Design and Development - Anypoint Studio, Anypoint Design Center, Anypoint
Connectors
>> API Runtime Execution and Hosting - Mule Runtimes, CloudHub, Runtime Services
>> API Operations and Management - Anypoint API Manager, Anypoint Exchange
>> API Consumer Engagement - API Contracts, Public Portals, Anypoint Exchange, API
Notebooks
Which out-of-the-box key performance indicator measures the success of a typical Center for Enablement and is immediately available in responses from Anypoint Platform APIs?
A. Per business group, the ratio of the number of production API implementations deployed using a CI/CD pipeline to the number of production API implementations deployed manually
B. Per deployed API implementation, the amount of bandwidth consumed each day
C. Per published API, the number of developers that downloaded a version of the API specification
D. Per published API, the number of consumers that requested access to the API and have been approved in the Production environment