A retail company with thousands of stores has an API to receive data about purchases and
insert it into a single database. Each individual store sends a batch of purchase data to the
API about every 30 minutes. The API implementation uses a database bulk insert
command to submit all the purchase data to a database using a custom JDBC driver
provided by a data analytics solution provider. The API implementation is deployed to a
single CloudHub worker. The JDBC driver processes the data into a set of several
temporary disk files on the CloudHub worker, and then the data is sent to an analytics
engine using a proprietary protocol. This process usually takes less than a few minutes.
Sometimes a request fails. In this case, the logs show an out-of-file-space message from
the JDBC driver. When the request is resubmitted, it is successful.
What is the best way to try to resolve this throughput issue?
A.
Use a CloudHub autoscaling policy to add CloudHub workers
B.
Use a CloudHub autoscaling policy to increase the size of the CloudHub worker
C.
Increase the size of the CloudHub worker(s)
D.
Increase the number of CloudHub workers
Increase the size of the CloudHub worker(s)
Explanation:
Correct Answer: Increase the size of the CloudHub worker(s)
*****************************************
The key details we can take from the given scenario are:
>> API implementation uses a database bulk insert command to submit all the purchase
data to a database
>> JDBC driver processes the data into a set of several temporary disk files on the
CloudHub worker
>> Sometimes a request fails, and the logs show an out-of-file-space message from the
JDBC driver
Based on the above details:
>> Neither autoscaling option helps, because autoscaling rules cannot be based on error
messages. Autoscaling rules are triggered by CPU/memory usage, not by a particular error
or by disk-space conditions.
>> Increasing the number of CloudHub workers also does NOT help, because the failures are
not caused by CPU or memory pressure; they are caused by a lack of disk space.
>> Moreover, the API performs a bulk insert of the received batch data, which means the
entire batch is handled by ONE worker at a time. The disk-space issue must therefore be
tackled on a per-worker basis: with multiple workers, a batch can still fail on whichever
worker runs out of disk space.
Therefore, the right way to resolve this issue is to increase the vCore size of the worker,
so that a new worker with more disk space is provisioned.
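A minimal sketch of the bulk-insert pattern described in the scenario, in plain Java (the JDBC URL, table, and column names are assumptions; the vendor's custom driver in the scenario additionally spools the rows to temporary disk files before shipping them to the analytics engine):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.List;

public class BulkInsertExample {

    // Hypothetical purchase record for illustration.
    record Purchase(String storeId, String sku, int quantity) {}

    public static void insertBatch(List<Purchase> purchases) throws Exception {
        // Hypothetical vendor JDBC URL; the scenario uses a custom driver.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:analytics://analytics.example.com/purchases");
             PreparedStatement stmt = conn.prepareStatement(
                 "INSERT INTO purchases (store_id, sku, quantity) VALUES (?, ?, ?)")) {

            for (Purchase p : purchases) {
                stmt.setString(1, p.storeId());
                stmt.setString(2, p.sku());
                stmt.setInt(3, p.quantity());
                stmt.addBatch();
            }
            // The ENTIRE batch is submitted by the one worker that received
            // the request; this is why disk capacity matters per worker.
            stmt.executeBatch();
        }
    }
}
```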
What is most likely NOT a characteristic of an integration test for a REST API
implementation?
A.
The test needs all source and/or target systems configured and accessible
B.
The test runs immediately after the Mule application has been compiled and packaged
C.
The test is triggered by an external HTTP request
D.
The test prepares a known request payload and validates the response payload
The test runs immediately after the Mule application has been compiled and packaged
Explanation:
Correct Answer: The test runs immediately after the Mule application has been compiled
and packaged
*****************************************
>> Integration tests are the last layer of tests we need to add to be fully covered.
>> These tests run against Mule with your full configuration in place and are driven from an external source, just as the application works in PROD.
>> These tests exercise the application as a whole with actual transports enabled, so
external systems are affected when these tests run.
So these tests do NOT run immediately after the Mule application has been compiled and
packaged.
FYI: unit tests are the ones that run immediately after the Mule application has been
compiled and packaged.
Reference: https://docs.mulesoft.com/mule-runtime/3.9/testing-strategies#integrationtesting
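For illustration, a minimal sketch of such an integration test in plain Java, assuming the application is already deployed and listening (the endpoint URL, payload, and expected status code are all assumptions):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OrderApiIntegrationTest {

    public static void main(String[] args) throws Exception {
        // The test is an external HTTP request against the deployed,
        // fully configured application -- not a post-compile unit test.
        HttpClient client = HttpClient.newHttpClient();

        // A known request payload (endpoint and payload are hypothetical).
        String payload = "{\"orderId\":\"1234\",\"quantity\":2}";
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://mule-app.example.com/api/orders"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(payload))
            .build();

        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());

        // Validate the response against the expected status and payload.
        if (response.statusCode() != 201) {
            throw new AssertionError("Unexpected status: " + response.statusCode());
        }
        if (!response.body().contains("\"orderId\":\"1234\"")) {
            throw new AssertionError("Unexpected body: " + response.body());
        }
        System.out.println("Integration test passed");
    }
}
```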
What is a key requirement when using an external Identity Provider for Client Management in Anypoint Platform?
A.
Single sign-on is required to sign in to Anypoint Platform
B.
The application network must include System APIs that interact with the Identity
Provider
C.
To invoke OAuth 2.0-protected APIs managed by Anypoint Platform, API clients must submit access tokens issued by that same Identity Provider
D.
APIs managed by Anypoint Platform must be protected by SAML 2.0 policies
To invoke OAuth 2.0-protected APIs managed by Anypoint Platform, API clients must submit access tokens issued by that same Identity Provider
Explanation:
Correct Answer: To invoke OAuth 2.0-protected APIs managed by Anypoint Platform, API
clients must submit access tokens issued by that same Identity Provider
*****************************************
>> It is NOT necessary that single sign-on be used to sign in to Anypoint Platform just
because an external Identity Provider is used for client management.
>> It is NOT necessary that APIs managed by Anypoint Platform be protected by SAML 2.0
policies just because an external Identity Provider is used for client management.
>> It is NOT true that the application network must include System APIs that interact with
the Identity Provider; client management via an external Identity Provider does not require them.
The only TRUE statement among the given options is: "To invoke OAuth 2.0-protected APIs
managed by Anypoint Platform, API clients must submit access tokens issued by that same
Identity Provider."
References:
https://docs.mulesoft.com/api-manager/2.x/external-oauth-2.0-token-validation-policy
https://blogs.mulesoft.com/dev/api-dev/api-security-ways-to-authenticate-and-authorize/
https://www.folkstalk.com/2019/11/mulesoft-integration-and-platform.html
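To illustrate the one TRUE statement, here is a minimal client-side sketch in plain Java, assuming an OAuth 2.0 client-credentials grant (the IdP token endpoint, API URL, client ID, and client secret are all hypothetical; the token extraction is deliberately naive):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OAuthClientExample {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // 1. Obtain an access token from the external Identity Provider
        //    (token endpoint and credentials are hypothetical).
        HttpRequest tokenRequest = HttpRequest.newBuilder(
                URI.create("https://idp.example.com/oauth2/token"))
            .header("Content-Type", "application/x-www-form-urlencoded")
            .POST(HttpRequest.BodyPublishers.ofString(
                "grant_type=client_credentials"
                + "&client_id=my-client-id"
                + "&client_secret=my-client-secret"))
            .build();
        String tokenJson = client.send(
            tokenRequest, HttpResponse.BodyHandlers.ofString()).body();

        // Naive extraction of "access_token" for illustration only;
        // use a real JSON parser in practice.
        String token = tokenJson.replaceAll(
            "(?s).*\"access_token\"\\s*:\\s*\"([^\"]+)\".*", "$1");

        // 2. Call the OAuth 2.0-protected API managed by Anypoint Platform,
        //    submitting the token issued by that same Identity Provider.
        HttpRequest apiRequest = HttpRequest.newBuilder(
                URI.create("https://api.example.com/orders"))
            .header("Authorization", "Bearer " + token)
            .GET()
            .build();
        HttpResponse<String> response =
            client.send(apiRequest, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```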
A company deploys Mule applications with default configurations through Runtime Manager to customer-hosted Mule runtimes. Each Mule application is an API implementation that exposes RESTful interfaces to API clients. The Mule runtimes are managed by the MuleSoft-hosted control plane. The payload is never used by any Logger components. When an API client sends an HTTP request to a customer-hosted Mule application, which metadata or data (payload) is pushed to the MuleSoft-hosted control plane?
A. Only the data
B. No data
C. The data and metadata
D. Only the metadata
Only the metadata
The responses to some HTTP requests can be cached depending on the HTTP verb used
in the request. According to the HTTP specification, for what HTTP verbs is this safe to do?
A.
PUT, POST, DELETE
B.
GET, HEAD, POST
C.
GET, PUT, OPTIONS
D.
GET, OPTIONS, HEAD
GET, OPTIONS, HEAD
A system API has a guaranteed SLA of 100 ms per request. The system API is deployed to a primary environment as well as to a disaster recovery (DR) environment, with different DNS names in each environment. An upstream process API invokes the system API and the main goal of this process API is to respond to client requests in the least possible time. In what order should the system APIs be invoked, and what changes should be made in order to speed up the response time for requests from the process API?
A. In parallel, invoke the system API deployed to the primary environment and the system API deployed to the DR environment, and ONLY use the first response
B. In parallel, invoke the system API deployed to the primary environment and the system API deployed to the DR environment using a scatter-gather configured with a timeout, and then merge the responses
C. Invoke the system API deployed to the primary environment, and if it fails, invoke the system API deployed to the DR environment
D. Invoke ONLY the system API deployed to the primary environment, and add timeout and retry logic to avoid intermittent failures
Explanation:
Correct Answer: In parallel, invoke the system API deployed to the primary environment
and the system API deployed to the DR environment, and ONLY use the first response.
*****************************************
>> The API requirement in the given scenario is to respond in the least possible time.
>> The option suggesting to first try the API in the primary environment and then fall back
to the API in the DR environment would result in a successful response, but NOT in the
least possible time. So this is NOT the right implementation for the given requirement.
>> The option suggesting to ONLY invoke the API in the primary environment, with timeout
and retry logic added, may also result in a successful response after retries, but again
NOT in the least possible time. So this is also NOT the right choice.
>> The option suggesting to invoke the API in the primary environment and the API in the DR
environment in parallel using a Scatter-Gather would return the wrong API response, as it
merges the results. Scatter-Gather does run its routes in parallel, but it completes only
after ALL routes inside it have finished, so it cannot return just the first response.
Again, NOT the right choice for the given requirement.
The correct choice is to invoke the API in the primary environment and the API in the DR
environment in parallel, and use ONLY the first response received from either of them.
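A minimal sketch of this first-response pattern in plain Java using CompletableFuture.anyOf (the two environment URLs are hypothetical; inside a Mule flow the equivalent would be custom parallel routing rather than a stock Scatter-Gather):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class FirstResponseExample {

    // Hypothetical DNS names for the two environments.
    private static final String PRIMARY_URL = "https://sys-api.primary.example.com/data";
    private static final String DR_URL      = "https://sys-api.dr.example.com/data";

    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();

        CompletableFuture<String> primary = call(client, PRIMARY_URL);
        CompletableFuture<String> dr      = call(client, DR_URL);

        // anyOf completes as soon as EITHER call completes, so only the
        // first response is used. (Note: it also completes on the first
        // failure, so production code would add error handling.)
        String first = (String) CompletableFuture.anyOf(primary, dr).join();
        System.out.println(first);
    }

    private static CompletableFuture<String> call(HttpClient client, String url) {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                     .thenApply(HttpResponse::body);
    }
}
```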
Refer to the exhibit.

A. Option A
B. Option B
C. Option C
D. Option D
Explanation:
Correct Answer: Allow System APIs to return data that is NOT currently required by the
identified Process or Experience APIs.

What CANNOT be effectively enforced using an API policy in Anypoint Platform?
A.
Guarding against Denial of Service attacks
B.
Maintaining tamper-proof credentials between APIs
C.
Logging HTTP requests and responses
D.
Backend system overloading
Guarding against Denial of Service attacks
Explanation:
Correct Answer: Guarding against Denial of Service attacks
*****************************************
>> Backend system overloading can be prevented by enforcing the Spike Control policy.
>> Logging HTTP requests and responses can be done by enforcing the Message Logging
policy.
>> Credentials can be kept tamper-proof between APIs using Security and Compliance
policies.
However, there is currently no API policy on Anypoint Platform that can effectively guard
against Denial of Service (DoS) attacks.
Reference: https://help.mulesoft.com/s/article/DDos-Dos-at