Mule applications that implement a number of REST APIs are deployed to their own subnet
that is inaccessible from outside the organization.
External business partners need to access these APIs, which may only be invoked from a
separate subnet dedicated to partners, called the Partner-subnet. This subnet
is accessible from the public internet, which allows these external partners to reach it.
Anypoint Platform and Mule runtimes are already deployed in Partner-subnet. These Mule
runtimes can already access the APIs.
What is the most resource-efficient solution to comply with these requirements, while
having the least impact on other applications that are currently using the APIs?
A.
Implement (or generate) an API proxy Mule application for each of the APIs, then deploy the API proxies to the Mule runtimes
B.
Redeploy the API implementations to the same servers running the Mule runtimes
C.
Add an additional endpoint to each API for partner-enablement consumption
D.
Duplicate the APIs as Mule applications, then deploy them to the Mule runtimes
Implement (or generate) an API proxy Mule application for each of the APIs, then deploy the API proxies to the Mule runtimes
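For illustration only, the sketch below shows the pass-through idea behind an API proxy: accept a request in the partner-facing subnet and forward it to the existing internal API, so the API implementations themselves never move or change. This is a hypothetical Python sketch, not a generated Anypoint API proxy (which is a Mule application); the upstream address and port are made-up values.

```python
# Hypothetical illustration only: a generated Anypoint API proxy is a Mule
# application, not Python. This sketch just shows the pass-through idea:
# accept a request in the partner-facing subnet and relay it to the internal
# API, so the API itself does not have to be redeployed or duplicated.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

UPSTREAM = "http://10.0.1.25:8081"  # assumed internal API address (hypothetical)

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the incoming path to the internal API and relay the response.
        with urlopen(UPSTREAM + self.path) as resp:
            body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Type",
                             resp.headers.get("Content-Type", "application/json"))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    # Listen on the partner-facing subnet; policies would normally be applied here.
    HTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()
```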
Due to a limitation in the backend system, a system API can only handle up to 500
requests per second. What is the best type of API policy to apply to the system API to avoid overloading the backend system?
A.
Rate limiting
B.
HTTP caching
C.
Rate limiting - SLA based
D.
Spike control
Spike control
Explanation:
Correct Answer: Spike control
*****************************************
>> First things first, the HTTP Caching policy serves a purpose different from preventing the
backend system from being overloaded. So this is OUT.
>> Rate Limiting and Throttling/Spike Control policies are both designed to limit API access, but
with different intentions.
>> Rate limiting protects an API by applying a hard limit on its access.
>> Throttling/Spike Control shapes API access by smoothing spikes in traffic.
That is why Spike Control is the right option.
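As an illustrative sketch only (not the MuleSoft policy engine), the code below contrasts the two behaviors: a rate limiter rejects anything over the quota in the current window, while spike control smooths bursts by delaying requests until capacity is available. The 500-requests-per-second figure from the question is used as the limit.

```python
# Conceptual sketch contrasting rate limiting and spike control behavior.
import time

class RateLimiter:
    """Hard limit: reject anything over the quota in the current one-second window."""
    def __init__(self, limit_per_second=500):
        self.limit = limit_per_second
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self):
        now = time.monotonic()
        if now - self.window_start >= 1.0:       # start a new one-second window
            self.window_start, self.count = now, 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False                              # caller would return 429 Too Many Requests

class SpikeControl:
    """Traffic shaping: delay requests so the backend sees a smooth, bounded rate."""
    def __init__(self, limit_per_second=500):
        self.min_interval = 1.0 / limit_per_second
        self.next_slot = time.monotonic()

    def acquire(self):
        now = time.monotonic()
        wait = max(0.0, self.next_slot - now)
        self.next_slot = max(now, self.next_slot) + self.min_interval
        time.sleep(wait)                          # smooth the burst instead of rejecting it
```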
A system API is deployed to a primary environment as well as to a disaster recovery (DR)
environment, with different DNS names in each environment. A process API is a client to
the system API and is being rate limited by the system API, with different limits in each of
the environments. The system API's DR environment provides only 20% of the rate limiting
offered by the primary environment. What is the best API fault-tolerant invocation strategy
to reduce overall errors in the process API, given these conditions and constraints?
A.
Invoke the system API deployed to the primary environment; add timeout and retry logic to
the process API to avoid intermittent failures; if it still fails, invoke the system API deployed
to the DR environment
B.
Invoke the system API deployed to the primary environment; add retry logic to the process
API to handle intermittent failures by invoking the system API deployed to the DR
environment
C.
In parallel, invoke the system API deployed to the primary environment and the system API
deployed to the DR environment; add timeout and retry logic to the process API to avoid
intermittent failures; add logic to the process API to combine the results
D.
Invoke the system API deployed to the primary environment; add timeout and retry logic to
the process API to avoid intermittent failures; if it still fails, invoke a copy of the process API
deployed to the DR environment
Invoke the system API deployed to the primary environment; add timeout and retry logic to
the process API to avoid intermittent failures; if it still fails, invoke the system API deployed
to the DR environment
Explanation:
Correct Answer: Invoke the system API deployed to the primary environment; add timeout
and retry logic to the process API to avoid intermittent failures; if it still fails, invoke the
system API deployed to the DR environment
*****************************************
There is one important consideration to note in the question: the system API in
the DR environment provides only 20% of the rate limit offered by the primary environment.
So, comparatively few calls will be allowed into the DR environment's API as opposed to
the primary environment. With this in mind, let's analyse which fault-tolerant
invocation strategy is right and best.
1. Invoking both system APIs in parallel is definitely NOT a feasible approach because
of the 20% limitation on the DR environment. Calling both in parallel every time would
quickly exhaust the DR environment's rate limit and leave no capacity for the genuine
intermittent-failure scenarios when it is actually needed.
2. Another option suggests adding timeout and retry logic to the process API while
invoking the primary environment's system API. This is good so far. However, when all retries
fail, the option suggests invoking a copy of the process API in the DR environment, which
is neither right nor recommended. Only the system API should be considered for fallback,
not the whole process API. Process APIs usually perform heavy orchestration across
many other APIs, which we do not want to repeat by calling the DR environment's process API. So this
option is NOT right.
3. One more option suggests adding retry logic (with no timeout) to the process API so that it
retries directly against the DR environment's system API instead of first retrying the primary
environment's system API. This is not a proper fallback at all. A proper fallback should occur only
after all retries against the primary environment have been performed and exhausted. Here, the
option suggests falling back to the DR API on the first failure without retrying the main
API. So this option is NOT right either.
This leaves us with the one option that is right and the best fit:
- Invoke the system API deployed to the primary environment
- Add Timeout and Retry logic on it in process API
- If it fails even after all retries, then invoke the system API deployed to the DR
environment.
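A minimal sketch of that strategy as client logic is shown below. In a real Mule application this would be a flow (for example Until Successful plus error handling), not Python; the primary and DR hostnames are hypothetical, and the requests library is assumed to be available.

```python
# Hypothetical sketch of the chosen strategy: call the primary system API with a
# timeout and retries, and only after all retries are exhausted fall back to the
# DR system API (which offers only 20% of the primary's rate limit).
import time
import requests

PRIMARY_URL = "https://system-api.primary.example.com/orders"   # assumed DNS name
DR_URL      = "https://system-api.dr.example.com/orders"        # assumed DNS name

def call_system_api(params, retries=3, timeout=5, backoff=2):
    # 1) Primary first, with timeout and retry to ride out intermittent failures.
    for attempt in range(retries):
        try:
            resp = requests.get(PRIMARY_URL, params=params, timeout=timeout)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            time.sleep(backoff * (attempt + 1))  # simple linear backoff between retries
    # 2) Only after exhausting retries on the primary, fall back to the DR system API.
    resp = requests.get(DR_URL, params=params, timeout=timeout)
    resp.raise_for_status()
    return resp.json()
```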
When could the API data model of a System API reasonably mimic the data model
exposed by the corresponding backend system, with minimal improvements over the
backend system's data model?
A.
When there is an existing Enterprise Data Model widely used across the organization
B.
When the System API can be assigned to a bounded context with a corresponding data
model
C.
When a pragmatic approach with only limited isolation from the backend system is deemed appropriate
D.
When the corresponding backend system is expected to be replaced in the near future
When a pragmatic approach with only limited isolation from the backend system is deemed appropriate
Explanation:
Correct Answer: When a pragmatic approach with only limited isolation from the backend
system is deemed appropriate.
*****************************************
General guidance w.r.t choosing Data Models:
>> If an Enterprise Data Model is in use then the API data model of System APIs should
make use of data types from that Enterprise Data Model and the corresponding API
implementation should translate between these data types from the Enterprise Data Model
and the native data model of the backend system.
>> If no Enterprise Data Model is in use then each System API should be assigned to a
Bounded Context, the API data model of System APIs should make use of data types from
the corresponding Bounded Context Data Model and the corresponding API
implementation should translate between these data types from the Bounded Context Data
Model and the native data model of the backend system. In this scenario, the data types in
the Bounded Context Data Model are defined purely in terms of their business
characteristics and are typically not related to the native data model of the backend system.
In other words, the translation effort may be significant.
>> If no Enterprise Data Model is in use, and the definition of a clean Bounded Context
Data Model is considered too much effort, then the API data model of System APIs should
make use of data types that approximately mirror those from the backend system: the same
semantics and naming as the backend system, lightly sanitized, exposing all fields needed for
the given System API's functionality but not significantly more, and making good use of
REST conventions.
The latter approach, i.e., exposing in System APIs an API data model that basically mirrors
that of the backend system, does not provide satisfactory isolation from backend systems
through the System API tier on its own. In particular, it will typically not be possible to
"swap out" a backend system without significantly changing all System APIs in front of that
backend system and therefore the API implementations of all Process APIs that depend on
those System APIs! This is so because it is not desirable to prolong the life of a previous
backend system’s data model in the form of the API data model of System APIs that now
front a new backend system. The API data models of System APIs following this approach
must therefore change when the backend system is replaced.
On the other hand:
>> It is a very pragmatic approach that adds comparatively little overhead over accessing
the backend system directly
>> Isolates API clients from intricacies of the backend system outside the data model
(protocol, authentication, connection pooling, network address, …)
>> Allows the usual API policies to be applied to System APIs
>> Makes the API data model for interacting with the backend system explicit and visible,
by exposing it in the RAML definitions of the System APIs
>> Further isolation from the backend system's data model then occurs in the API data models of the Process APIs built on top of these System APIs.
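To illustrate "approximately mirror the backend data model, lightly sanitized", here is a hypothetical sketch: the backend field names, the Customer type, and the translation are all invented for illustration. Note that the mapping only renames and cleans fields; it does not remap to an Enterprise or Bounded Context Data Model, which is exactly why the isolation it provides is limited.

```python
# Hypothetical illustration of the pragmatic approach: the System API's data
# model mirrors the backend record, with only light sanitization (consistent
# field names, ISO-8601 dates) rather than a translation to a canonical model.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Customer:                      # System API data model (mirrors the backend)
    customer_id: str
    full_name: str
    created_date: str                # ISO-8601 instead of the backend's epoch seconds

def to_api_model(backend_record: dict) -> dict:
    """Translate a (hypothetical) backend row into the System API representation."""
    return asdict(Customer(
        customer_id=backend_record["CUST_NO"],
        full_name=backend_record["CUST_NM"].strip(),
        created_date=datetime.fromtimestamp(
            backend_record["CRTD_TS"], tz=timezone.utc).isoformat(),
    ))
```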
A European company has customers all across Europe, and the IT department is migrating from an older platform to MuleSoft. The main requirements are that the new platform should allow redeployments with zero downtime and deployment of applications to multiple runtime versions, provide security and speed, and utilize Anypoint MQ as the message service. Which runtime plane should the company select based on the requirements without additional network configuration?
A. Runtime Fabric on VMs / Bare Metal for the runtime plane
B. Customer-hosted runtime plane
C. MuleSoft-hosted runtime plane (CloudHub)
D. Anypoint Runtime Fabric on Self-Managed Kubernetes for the runtime plane
Explanation:
Correct Answer: MuleSoft-hosted runtime plane (CloudHub)
For a European company with requirements such as zero-downtime
redeployment, deployment of applications to multiple runtime versions, secure and fast
performance, and the use of Anypoint MQ without additional network configuration,
CloudHub is the best choice: it is fully hosted and managed by MuleSoft, supports
zero-downtime redeployments, allows each application to be deployed on the runtime
version it needs, and works with Anypoint MQ out of the box, since both are
MuleSoft-hosted and require no additional network configuration.
A retail company is using an Order API to accept new orders. The Order API uses a JMS
queue to submit orders to a backend order management service. The normal load for
orders is being handled using two (2) CloudHub workers, each configured with 0.2 vCore.
The CPU load of each CloudHub worker normally runs well below 70%. However, several
times during the year the Order API gets four times (4x) the average number of orders.
This causes the CloudHub worker CPU load to exceed 90% and the order submission time
to exceed 30 seconds. The cause, however, is NOT the backend order management
service, which still responds fast enough to meet the response SLA for the Order API.
What is the MOST resource-efficient way to configure the Mule application's CloudHub
deployment to help the company cope with this performance challenge?
A.
Permanently increase the size of each of the two (2) CloudHub workers by at least four
times (4x) to one (1) vCore
B.
Use a vertical CloudHub autoscaling policy that triggers on CPU utilization greater than
70%
C.
Permanently increase the number of CloudHub workers by four times (4x) to eight (8)
CloudHub workers
D.
Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater
than 70%
Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater
than 70%
Explanation:
Correct Answer: Use a horizontal CloudHub autoscaling policy that triggers on CPU
utilization greater than 70%
*****************************************
The scenario in the question clearly states that the usual traffic during the year is
handled well by the existing worker configuration, with CPU running well below 70%. The
problem occurs only occasionally, when there is a spike in the number of incoming orders.
So, based on the above, we neither need to permanently increase the size of each worker nor
permanently increase the number of workers. That would be unnecessary, because outside
those occasional spikes the extra resources would sit idle and be wasted.
That leaves two options: use a horizontal CloudHub autoscaling policy to
automatically increase the number of workers, or use a vertical CloudHub autoscaling
policy to automatically increase the vCore size of each worker.
Here, we need to take two things into consideration:
1. CPU
2. Order Submission Rate to JMS Queue
>> From a CPU perspective, both options (horizontal and vertical scaling) solve the
issue; both help bring the usage back down below 90%.
>> However, if we go with vertical scaling, then from the order submission rate perspective
the application is still load balanced across only two workers, so there may not be much
improvement in the incoming request processing rate or the order submission rate to the JMS
queue. The throughput would be the same as before; only the CPU utilization comes down.
>> But if we go with horizontal scaling, it will spawn new workers and add extra capacity,
increasing throughput because more workers are now being load balanced. This way we
address both the CPU load and the order submission rate.
Hence, a horizontal CloudHub autoscaling policy is the right and best answer.
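As a conceptual sketch only (CloudHub autoscaling is configured as a policy in Runtime Manager, not hand-coded), the decision logic of a horizontal policy triggering on CPU above 70% looks roughly like this; the thresholds and worker limits below are assumptions taken from this scenario.

```python
# Conceptual sketch of a horizontal autoscaling decision, NOT the CloudHub
# implementation: scale the worker count out when sustained CPU exceeds the
# trigger threshold, and back in when load returns to normal.
def desired_worker_count(current_workers, avg_cpu_percent,
                         scale_out_at=70, scale_in_at=30,
                         min_workers=2, max_workers=8):
    if avg_cpu_percent > scale_out_at and current_workers < max_workers:
        return current_workers + 1      # add another 0.2 vCore worker behind the load balancer
    if avg_cpu_percent < scale_in_at and current_workers > min_workers:
        return current_workers - 1      # release the extra worker once the spike ends
    return current_workers

# Example: during a 4x order spike, CPU exceeds 70%, so workers grow from 2 toward 8;
# afterwards the count drops back to 2, so no resources stay permanently over-provisioned.
```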
Which layer in API-led connectivity focuses on unlocking key systems, legacy systems, data sources, etc., and exposes their functionality?
A.
Experience Layer
B.
Process Layer
C.
System Layer
System Layer
Explanation:
Correct Answer: System Layer
An API implementation returns three X-RateLimit-* HTTP response headers to a requesting API client. What type of information do these response headers indicate to the API client?
A.
The error codes that result from throttling
B.
A correlation ID that should be sent in the next request
C.
The HTTP response size
D.
The remaining capacity allowed by the API implementation
The remaining capacity allowed by the API implementation
Explanation:
Correct Answer: The remaining capacity allowed by the API implementation.
*****************************************
>> Reference: https://docs.mulesoft.com/api-manager/2.x/rate-limiting-and-throttling-sla-based-policies#response-headers
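A small client-side sketch of how an API client can use these headers is shown below: X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset together tell the client how much capacity is left in the current window and when it resets. The endpoint URL is hypothetical and the requests library is assumed to be available.

```python
# Hypothetical client sketch: read the three X-RateLimit-* response headers
# returned by the rate-limiting policy to decide whether to keep calling or back off.
import time
import requests

def call_with_rate_awareness(url="https://api.example.com/orders"):  # assumed URL
    resp = requests.get(url)
    limit     = int(resp.headers.get("X-RateLimit-Limit", 0))      # quota per window
    remaining = int(resp.headers.get("X-RateLimit-Remaining", 0))  # calls left in this window
    reset_ms  = int(resp.headers.get("X-RateLimit-Reset", 0))      # time until the window resets
    if remaining == 0 and reset_ms > 0:
        time.sleep(reset_ms / 1000.0)   # back off until capacity is restored
    return resp
```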