What API policy would LEAST likely be applied to a Process API?
A. Custom circuit breaker
B. Client ID enforcement
C. Rate limiting
D. JSON threat protection
JSON threat protection
Explanation:
Correct Answer: JSON threat protection
*****************************************
Fact: Technically, there are no restrictions on which policy can be applied at which layer; any policy can be applied to an API in any layer. However, the context should be considered properly before blindly applying policies to APIs.
That is why this question asks for the policy that would LEAST likely be applied to a Process API.
From the given options:
>> All policies except "JSON threat protection" can be applied without hesitation to APIs in the Process tier.
>> The JSON threat protection policy ideally fits Experience APIs, to block suspicious JSON payloads coming from external API clients. It covers more of a security aspect, aiming to stop potentially malicious and harmful JSON payloads from external clients calling Experience APIs.
Because external API clients are NEVER allowed to call Process APIs directly, and because such malicious and harmful JSON payloads are always stopped at the Experience API layer by this policy, it is LEAST likely that the same policy would be applied again to a Process-layer API.
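As an illustration of what this policy does at the Experience layer, below is a minimal Python sketch of the kind of checks a JSON threat protection policy performs. This is not the Mule policy itself (which is configured in API Manager, not hand-coded), and the limit values are illustrative assumptions.

```python
import json

# Illustrative limits only (assumptions); a real JSON threat protection policy
# is configured in API Manager rather than hand-coded like this.
MAX_DEPTH = 10           # maximum nesting depth allowed
MAX_STRING_LENGTH = 512  # maximum length of any string value
MAX_ARRAY_SIZE = 100     # maximum number of elements in any array

def check_json_threats(payload: str) -> None:
    """Raise ValueError if the JSON payload looks abusive or malicious."""
    data = json.loads(payload)

    def walk(node, depth=1):
        if depth > MAX_DEPTH:
            raise ValueError("JSON nesting too deep")
        if isinstance(node, dict):
            for value in node.values():
                walk(value, depth + 1)
        elif isinstance(node, list):
            if len(node) > MAX_ARRAY_SIZE:
                raise ValueError("JSON array too large")
            for item in node:
                walk(item, depth + 1)
        elif isinstance(node, str) and len(node) > MAX_STRING_LENGTH:
            raise ValueError("JSON string value too long")

    walk(data)

# A deeply nested or oversized payload from an external client is rejected at
# the Experience API, so it never reaches a Process API.
```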
What is a typical result of using a fine-grained rather than a coarse-grained API deployment model to implement a given business process?
A. A decrease in the number of connections within the application network supporting the business process
B. A higher number of discoverable API-related assets in the application network
C. A better response time for the end user as a result of the APIs being smaller in scope and complexity
D. An overall lower usage of resources because each fine-grained API consumes fewer resources
A higher number of discoverable API-related assets in the application network
Explanation:
Correct Answer: A higher number of discoverable API-related assets in the application
network.
*****************************************
>> We do NOT get faster response times with a fine-grained approach when compared to a coarse-grained approach.
>> In fact, we get faster response times from a network of coarse-grained APIs than from a network of fine-grained APIs. The reasons are below.
Fine-grained approach:
1. There are more APIs compared to the coarse-grained approach.
2. So, more orchestration needs to be done to achieve a given piece of functionality in the business process.
3. That means many more API calls are made, so more connections need to be established. So there are obviously more hops, more network I/O, and more integration points than in a coarse-grained approach, where fewer APIs have bulk functionality embedded in them.
4. That is why, because of all these extra hops and added latencies, the fine-grained approach has somewhat higher response times than the coarse-grained one.
5. Besides the added latencies and connections, more resources are used up in the fine-grained approach due to the larger number of APIs.
That is why fine-grained APIs are good for exposing a larger number of reusable assets in your network and making them discoverable. However, they need more maintenance and more care of integration points, connections, and resources, with a small compromise in network hops and response times.
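To make the extra-hops argument concrete, here is a purely illustrative Python sketch (the endpoints, field names, and overhead figure are assumptions, not from any real API) contrasting one coarse-grained call with the orchestration of several fine-grained calls.

```python
import requests  # assumed available; all endpoints below are hypothetical

PER_CALL_OVERHEAD_MS = 30  # assumed network/TLS/policy overhead paid on every hop

def coarse_grained_order(order_id: str) -> dict:
    # One bulk API returns everything the business process needs: one hop,
    # one connection, overhead paid once.
    return requests.get(f"https://api.example.com/orders/{order_id}/full").json()

def fine_grained_order(order_id: str) -> dict:
    # The same functionality orchestrated over three smaller APIs: three hops,
    # three connections, overhead paid three times.
    order = requests.get(f"https://api.example.com/orders/{order_id}").json()
    customer = requests.get(
        f"https://api.example.com/customers/{order['customerId']}"
    ).json()
    items = requests.get(f"https://api.example.com/orders/{order_id}/items").json()
    return {"order": order, "customer": customer, "items": items}

# Trade-off: the fine-grained version pays roughly 3 x PER_CALL_OVERHEAD_MS of
# hop overhead, but publishes three reusable, discoverable assets instead of one.
```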
Refer to the exhibit.

A. Option A
B. Option B
C. Option C
D. Option D
Explanation:
Correct Answer: Allow System APIs to return data that is NOT currently required by the
identified Process or Experience APIs.

A circuit breaker strategy is planned in order to meet the goal of improved response time and reduced demand on a downstream API. Which approach should be used to implement this strategy?
A. Create a custom policy that implements the circuit breaker and includes policy template expressions for the required settings
B. Create Anypoint Monitoring alerts for Circuit Open/Closed configurations, and then implement a retry strategy for Circuit Half-Open configuration
C. Add the Circuit Breaker policy to the API instance, and configure the required settings
D. Implement the strategy in a Mule application, and provide the settings in the YAML configuration
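For reference, the Closed / Open / Half-Open circuit states mentioned in these options follow the general circuit breaker pattern. The Python sketch below is a minimal, hand-rolled illustration of that pattern under assumed thresholds; it is not MuleSoft's policy or any particular product implementation.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker sketch: CLOSED -> OPEN -> HALF_OPEN -> CLOSED."""

    def __init__(self, failure_threshold=5, reset_timeout_s=30):
        self.failure_threshold = failure_threshold  # assumed failure limit
        self.reset_timeout_s = reset_timeout_s      # assumed cool-down period
        self.failures = 0
        self.state = "CLOSED"
        self.opened_at = 0.0

    def call(self, downstream):
        """Invoke the downstream callable, tripping the circuit on repeated failures."""
        if self.state == "OPEN":
            if time.time() - self.opened_at >= self.reset_timeout_s:
                self.state = "HALF_OPEN"  # let one trial request through
            else:
                # Fail fast: spares the struggling downstream API and keeps
                # the caller's response time low.
                raise RuntimeError("Circuit open: request rejected without calling downstream")
        try:
            result = downstream()
        except Exception:
            self.failures += 1
            if self.state == "HALF_OPEN" or self.failures >= self.failure_threshold:
                self.state = "OPEN"
                self.opened_at = time.time()
            raise
        self.failures = 0
        self.state = "CLOSED"
        return result
```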
Several times a week, an API implementation shows several thousand requests per minute in an Anypoint Monitoring dashboard. Between these bursts, the dashboard shows between two and five requests per minute. The API implementation is running on Anypoint Runtime Fabric with two non-clustered replicas, reserved vCPU 1.0 and vCPU limit 2.0.
An API consumer has complained about slow response time, and the dashboard shows the 99th percentile is greater than 120 seconds at the time of the complaint. It also shows greater than 90% CPU usage during these time periods.
In manual tests in the QA environment, the API consumer has consistently reproduced the slow response time and high CPU usage, and there were no other API requests at that time. In a brainstorming session, the engineering team has created several proposals to reduce the response time for requests.
Which proposal should be pursued first?
A. Increase the vCPU resources of the API implementation
B. Modify the API client to split the problematic request into smaller, less-demanding requests
C. Increase the number of replicas of the API implementation
D. Throttle the API client to reduce the number of requests per minute
When could the API data model of a System API reasonably mimic the data model
exposed by the corresponding backend system, with minimal improvements over the
backend system's data model?
A. When there is an existing Enterprise Data Model widely used across the organization
B. When the System API can be assigned to a bounded context with a corresponding data model
C. When a pragmatic approach with only limited isolation from the backend system is deemed appropriate
D. When the corresponding backend system is expected to be replaced in the near future
When a pragmatic approach with only limited isolation from the backend system is deemed appropriate
Explanation:
Correct Answer: When a pragmatic approach with only limited isolation from the backend
system is deemed appropriate.
*****************************************
General guidance w.r.t choosing Data Models:
>> If an Enterprise Data Model is in use then the API data model of System APIs should
make use of data types from that Enterprise Data Model and the corresponding API
implementation should translate between these data types from the Enterprise Data Model
and the native data model of the backend system.
>> If no Enterprise Data Model is in use then each System API should be assigned to a
Bounded Context, the API data model of System APIs should make use of data types from
the corresponding Bounded Context Data Model and the corresponding API
implementation should translate between these data types from the Bounded Context Data
Model and the native data model of the backend system. In this scenario, the data types in
the Bounded Context Data Model are defined purely in terms of their business
characteristics and are typically not related to the native data model of the backend system.
In other words, the translation effort may be significant.
>> If no Enterprise Data Model is in use, and the definition of a clean Bounded Context Data Model is considered too much effort, then the API data model of System APIs should make use of data types that approximately mirror those of the backend system: the same semantics and naming as the backend system, lightly sanitized, exposing all fields needed for the given System API's functionality but not significantly more, and making good use of REST conventions.
The latter approach, i.e., exposing in System APIs an API data model that basically mirrors
that of the backend system, does not provide satisfactory isolation from backend systems
through the System API tier on its own. In particular, it will typically not be possible to
"swap out" a backend system without significantly changing all System APIs in front of that
backend system and therefore the API implementations of all Process APIs that depend on
those System APIs! This is so because it is not desirable to prolong the life of a previous
backend system’s data model in the form of the API data model of System APIs that now
front a new backend system. The API data models of System APIs following this approach
must therefore change when the backend system is replaced.
On the other hand:
>> It is a very pragmatic approach that adds comparatively little overhead over accessing
the backend system directly
>> Isolates API clients from intricacies of the backend system outside the data model
(protocol, authentication, connection pooling, network address, …)
>> Allows the usual API policies to be applied to System APIs
>> Makes the API data model for interacting with the backend system explicit and visible,
by exposing it in the RAML definitions of the System APIs
>> Further isolation from the backend system data model does occur in the API implementations of the Process API layer.
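As a rough illustration of this "lightly sanitized mirror" approach, the hypothetical Python sketch below exposes a System API data type whose fields track the backend record almost one-to-one. All names, fields, and values are invented for the example; the point is the minimal translation compared to mapping into an Enterprise or Bounded Context data model.

```python
from dataclasses import dataclass

# Hypothetical backend record as returned by a legacy system (names invented).
backend_record = {
    "CUST_NO": "000123",
    "CUST_NM": "ACME Corp",
    "CRT_DT": "2023-07-01",
    "INTERNAL_FLAG_X": "Y",  # internal detail the System API does not need to expose
}

@dataclass
class Customer:
    """System API data model: a lightly sanitized mirror of the backend model."""
    customerNumber: str
    customerName: str
    createdDate: str

def to_system_api_model(record: dict) -> Customer:
    # Same semantics and near-identical naming as the backend; only light cleanup,
    # and fields not needed by this System API are simply not exposed.
    return Customer(
        customerNumber=record["CUST_NO"].lstrip("0"),
        customerName=record["CUST_NM"],
        createdDate=record["CRT_DT"],
    )

# If the backend system is ever swapped out, this mapping (and the API data model
# it produces) will likely have to change, which is the isolation trade-off noted above.
```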
Which of the following best fits the definition of API-led connectivity?
A. API-led connectivity is not just an architecture or technology but also a way to organize people and processes for efficient IT delivery in the organization
B. API-led connectivity is a 3-layered architecture covering Experience, Process and System layers
C. API-led connectivity is a technology that enables us to implement Experience, Process and System layer based APIs
API-led connectivity is not just an architecture or technology but also a way to organize people and processes for efficient IT delivery in the organization
Explanation:
Correct Answer: API-led connectivity is not just an architecture or technology but also a
way to organize people and processes for efficient IT delivery in the organization.
*****************************************
Reference: https://blogs.mulesoft.com/dev/api-dev/what-is-api-led-connectivity/
A manufacturing company has deployed an API implementation to CloudHub and has not configured it to be automatically restarted by CloudHub when the worker is not responding. Which statement is true when no API Client invokes that API implementation?
A. No alert on the API invocations and API implementation can be raised
B. Alerts on the API invocation and API implementation can be raised
C. No alert on the API invocations is raised but alerts on the API implementation can be raised
D. Alerts on the API invocations are raised but no alerts on the API implementation can be raised
Explanation:
When an API implementation is deployed on CloudHub without configuring
automatic restarts in case of worker non-responsiveness, MuleSoft’s monitoring and
alerting behavior is as follows: