An Order API triggers a sequence of other API calls to look up details of an order's items in
a back-end inventory database. The Order API calls the OrderItems process API, which
calls the Inventory system API. The Inventory system API performs database operations in
the back-end inventory database.
The network connection between the Inventory system API and the database is known to
be unreliable and hang at unpredictable times.
Where should a two-second timeout be configured in the API processing sequence so that
the Order API never waits more than two seconds for a response from the OrderItems
process API?

A. In the OrderItems process API implementation
B. In the Order API implementation
C. In the Inventory system API implementation
D. In the inventory database
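Whichever layer the timeout belongs in, the mechanics are the same: the caller enforces a two-second cap on its outbound call so it never blocks on a hung downstream connection. Below is a minimal Java sketch of that idea, assuming a hypothetical downstream URL and plain java.net.http; it illustrates the timeout concept only and is not a Mule implementation.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpTimeoutException;
import java.time.Duration;

public class OrderApiClient {
    // Hypothetical endpoint of the downstream process API.
    private static final String ORDER_ITEMS_URL = "https://internal.example.com/order-items/123";

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))   // cap time to establish the connection
                .build();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(ORDER_ITEMS_URL))
                .timeout(Duration.ofSeconds(2))          // cap time waiting for the response
                .GET()
                .build();

        try {
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Downstream responded: " + response.statusCode());
        } catch (HttpTimeoutException e) {
            // The caller gives up after two seconds instead of hanging on the downstream call.
            System.out.println("Timed out after 2 seconds: " + e.getMessage());
        }
    }
}
```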
Refer to the exhibit.
A.
Option A
B.
Option B
C.
Option C
D.
Option D
Option A
Explanation:
Correct Answer: Ask the Marketing Department to interact with a mocking implementation
of the API using the automatically generated API Console.
*****************************************
As per MuleSoft's IT Operating Model:
>> API consumers need NOT wait until the full API implementation is ready.
>> NO technical test suites need to be shared with end users for them to interact with the APIs.
>> Anypoint Platform offers a mocking capability for every API specification published to
Anypoint Exchange, which is also rich in documentation covering all the details of the API's
functionality and behaviour.
>> There is no need to arrange days of workshops with end users to gather feedback.
API consumers can use Anypoint Exchange on the platform and interact with the API
through its mocking feature. Feedback can then be shared quickly so that any changes can
be incorporated.
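As an illustration of how a consumer such as the Marketing Department might exercise a mocked API before any implementation exists, here is a minimal Java sketch. The mocking base URL and resource path are hypothetical placeholders, not the actual Anypoint mocking-service address.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MockApiConsumer {
    public static void main(String[] args) throws Exception {
        // Hypothetical mocking-service base URL published alongside the API specification.
        String mockBaseUrl = "https://mock.example.com/exchange/my-org/customer-api/v1";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(mockBaseUrl + "/customers/42"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The mock returns the example payload from the API specification,
        // so consumers can give feedback before any implementation exists.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```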
What is a typical result of using a fine-grained rather than a coarse-grained API deployment model to implement a given business process?
A.
A decrease in the number of connections within the application network supporting the business process
B.
A higher number of discoverable API-related assets in the application network
C.
A better response time for the end user as a result of the APIs being smaller in scope and complexity
D.
An overall lower usage of resources because each fine-grained API consumes fewer resources
A higher number of discoverable API-related assets in the application network
Explanation:
Correct Answer: A higher number of discoverable API-related assets in the application
network.
*****************************************
>> We do NOT get faster response times with the fine-grained approach when compared to
the coarse-grained approach.
>> In fact, we get faster response times from a network of coarse-grained APIs than from a
network of fine-grained APIs. The reasons are below.
Fine-grained approach:
1. There are more APIs compared to the coarse-grained approach.
2. So, more orchestration is needed to achieve a piece of functionality in the business process.
3. That means many more API calls, so more connections must be established: more hops,
more network I/O, and more integration points than in the coarse-grained approach, where
fewer APIs carry bulkier functionality.
4. Because of all these extra hops and the added latency, the fine-grained approach has
somewhat higher response times than the coarse-grained approach.
5. Besides the added latency and connections, more resources are used up in the fine-grained
approach because of the larger number of APIs.
That is why fine-grained APIs are good for exposing a larger number of reusable assets
in your network and making them discoverable. However, they need more maintenance and
more care around integration points, connections, and resources, with a small compromise in
network hops and response times.
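To make the extra-hops argument concrete, here is a rough Java sketch contrasting one coarse-grained call with the several fine-grained calls that would have to be orchestrated for the same result. All API names and the call stub are hypothetical.

```java
// A rough sketch (hypothetical services) of why orchestrating many fine-grained
// APIs tends to add network hops compared with one coarse-grained call.
public class GranularityComparison {

    // One coarse-grained API returns the whole order view in a single hop.
    static String coarseGrainedOrderView(String orderId) {
        return callRemoteApi("order-view-api", orderId);              // 1 network call
    }

    // The same functionality built from fine-grained APIs needs several hops,
    // and the results must be merged by the caller (or a process API).
    static String fineGrainedOrderView(String orderId) {
        String order    = callRemoteApi("order-api", orderId);        // hop 1
        String items    = callRemoteApi("order-items-api", orderId);  // hop 2
        String customer = callRemoteApi("customer-api", orderId);     // hop 3
        String shipping = callRemoteApi("shipping-api", orderId);     // hop 4
        return order + items + customer + shipping;                   // extra orchestration
    }

    // Stand-in for a real HTTP call; each invocation represents one connection/hop.
    static String callRemoteApi(String apiName, String id) {
        return "[" + apiName + ":" + id + "]";
    }

    public static void main(String[] args) {
        System.out.println(coarseGrainedOrderView("42"));
        System.out.println(fineGrainedOrderView("42"));
    }
}
```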
A company stores financial transaction data in two legacy systems. For each legacy
system, a separate, dedicated System API (SAPI) exposes data for that legacy system. A
Process API (PAPI) merges the data retrieved from all of the System APIs into a common
format. Several API clients call the PAPI through its public domain name.
The company now wants to expose a subset of financial data to a newly developed mobile
application that uses a different Bounded Context Data Model. The company wants to
follow MuleSoft's best practices for building out an effective application network.
Following MuleSoft's best practices, how can the company expose financial data needed
by the mobile application in a way that minimizes the impact on the currently running API
clients, API implementations, and support asset reuse?
A. Add two new Experience APIs (EAPI-1 and EAPI-2).
Add Mobile PAPI-2 to expose the intended subset of financial data as requested.
Both PAPIs access the legacy systems via SAPI-1 and SAPI-2.
B. Add two new Experience APIs (EAPI-1 and EAPI-2).
Add Mobile PAPI-2 to expose the intended subset of financial data as requested.
Both PAPIs access the legacy systems via SAPI-1 and SAPI-2.
C. Create a new mobile Experience API (EAPI) that exposes that subset of PAPI endpoints.
Add transformation logic to the mobile Experience API implementation to make the mobile
data compatible with the required PAPIs.

D. Develop and deploy a new PAPI implementation with data transformation and ... logic to
support the required endpoints of both mobile and web clients.
Deploy an API proxy with an endpoint from API Manager that redirects the existing PAPI
endpoints to the new PAPI.
Explanation:
To achieve the goal of exposing financial data to a new mobile application while following
MuleSoft’s best practices, the company should follow an API-led connectivity approach.
This approach ensures minimal disruption to existing clients, maximizes reusability, and
respects the separation of concerns across API layers.
Explanation of Solution:
Experience APIs for Client-Specific Requirements:
Process API Layer for Data Transformation:
Reuse of System APIs:
Why Option A is Correct:
Explanation of Incorrect Options:
Option B: This option seems similar but lacks clarity on the separation of mobile-specific
requirements and does not explicitly mention data transformation, which is
essential in this scenario.
Option C: Creating a single mobile Experience API that exposes a subset of PAPI
endpoints directly adds unnecessary complexity and may violate the separation of
concerns, as transformation logic should not be in the Experience layer.
Option D: Deploying a new PAPI and using an API Proxy to redirect existing
endpoints would add unnecessary complexity, disrupt the current API clients, and
increase maintenance efforts.
References:
For additional guidance, refer to MuleSoft documentation on API-led
connectivity best practices and best practices for structuring Experience, Process, and
System APIs.
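As a rough illustration of the kind of transformation the new mobile Process API would own, the Java sketch below maps a common financial-data record into the mobile app's own Bounded Context Data Model. The types and field names are hypothetical.

```java
// A rough sketch (hypothetical types and fields) of the transformation a mobile
// Process API might perform: mapping the common financial-data model used by the
// existing PAPI into the subset the mobile app's bounded context actually needs.
public class MobileTransactionMapper {

    // Common model as returned by the existing System/Process APIs (illustrative).
    record CommonTransaction(String transactionId, String accountNumber,
                             double amount, String currency, String postingDate) {}

    // Subset/shape required by the mobile app's bounded context (illustrative).
    record MobileTransaction(String id, double amount, String displayDate) {}

    static MobileTransaction toMobile(CommonTransaction tx) {
        // Only the fields the mobile app needs are exposed; everything else stays internal.
        return new MobileTransaction(tx.transactionId(), tx.amount(), tx.postingDate());
    }

    public static void main(String[] args) {
        CommonTransaction tx =
                new CommonTransaction("T-1001", "ACC-9", 125.50, "USD", "2024-03-01");
        System.out.println(toMobile(tx));
    }
}
```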
When should idempotency be taken into account?
A. When making requests to update currently locked entities
B. When storing the results of a previous request for use in response to subsequent requests
C. When sending concurrent update requests for the same entity
D. When preventing duplicate processing from multiple sent requests
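For context, one common idempotency technique is to store the result of a previous request under a client-supplied idempotency key so that a re-sent request is not processed twice. The Java sketch below illustrates that pattern; the handler, key, and payload names are purely illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A minimal sketch of idempotent request handling: the result of the first request
// with a given idempotency key is stored and replayed for duplicates, so re-sent
// requests do not trigger duplicate processing.
public class IdempotentPaymentHandler {

    private final Map<String, String> processedRequests = new ConcurrentHashMap<>();

    public String handle(String idempotencyKey, String paymentPayload) {
        // computeIfAbsent runs the charge at most once per key, even under concurrency.
        return processedRequests.computeIfAbsent(idempotencyKey, key -> charge(paymentPayload));
    }

    private String charge(String paymentPayload) {
        // Stand-in for the real side effect (e.g., charging a card exactly once).
        return "receipt-for-" + paymentPayload;
    }

    public static void main(String[] args) {
        IdempotentPaymentHandler handler = new IdempotentPaymentHandler();
        // The same request sent twice (e.g., a client retry) yields the same receipt,
        // and the charge is executed only once.
        System.out.println(handler.handle("key-123", "order-42"));
        System.out.println(handler.handle("key-123", "order-42"));
    }
}
```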
A company wants to move its Mule API implementations into production as quickly as
possible. To protect access to all Mule application data and metadata, the company
requires that all Mule applications be deployed to the company's customer-hosted
infrastructure within the corporate firewall. What combination of runtime plane and control
plane options meets these project lifecycle goals?
A.
Manually provisioned customer-hosted runtime plane and customer-hosted control plane
B.
MuleSoft-hosted runtime plane and customer-hosted control plane
C.
Manually provisioned customer-hosted runtime plane and MuleSoft-hosted control plane
D.
iPaaS provisioned customer-hosted runtime plane and MuleSoft-hosted control plane
Manually provisioned customer-hosted runtime plane and customer-hosted control plane
Explanation:
Correct Answer: Manually provisioned customer-hosted runtime plane and customer-hosted
control plane
*****************************************
There are two key factors to take into consideration from the scenario given in the question.
>> The company requires both data and metadata to reside within the corporate firewall.
>> The company would like to go with customer-hosted infrastructure.
Any deployment model that deals with the cloud directly or indirectly (MuleSoft-hosted, or
the customer's own cloud such as Azure or AWS) will have to share at least the metadata.
Application data can be kept inside the firewall by running Mule runtimes on a customer-hosted
runtime plane. But with a MuleSoft-hosted/cloud-based control plane, at least some minimal
amount of metadata must be sent outside the corporate firewall.
As the requirement is clear that both data and metadata must stay within the corporate
firewall, the company, even though it wants to move to production as quickly as possible,
has no option other than a manually provisioned customer-hosted runtime plane and a
customer-hosted control plane, given the nature of its security requirements.
An established communications company is beginning its API-led connectivity journey. The
company has been using a successful Enterprise Data Model for many years. The company has identified a self-service account management app as the first effort for API-led connectivity,
and it has identified the following APIs.
A. Customer SAPI
B. Customer Lookup PAPI
C. Mobile Account Management EAPI
D. Service SAPI
Explanation: In the API-led connectivity approach, APIs are categorized into Experience,
Process, and System layers:
Enterprise Data Model Scope:
Why Option C is Correct:
Explanation of Incorrect Options:
References:
For additional guidance, review MuleSoft's best practices on API-led
connectivity and data modeling.
Due to a limitation in the backend system, a system API can only handle up to 500
requests per second. What is the best type of API policy to apply to the system API to avoid overloading the backend system?
A.
Rate limiting
B.
HTTP caching
C.
Rate limiting - SLA based
D.
Spike control
Spike control
Explanation:
Correct Answer: Spike control
*****************************************
>> First things first, the HTTP Caching policy serves purposes other than protecting the
backend system from overload, so it is OUT.
>> Rate Limiting and Throttling/Spike Control policies are both designed to limit API access,
but with different intentions.
>> Rate Limiting protects an API by applying a hard limit on its access.
>> Throttling/Spike Control shapes API access by smoothing out spikes in traffic.
That is why Spike Control is the right option.
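As a simplified illustration of the behavioural difference (not Anypoint's actual policy implementation), the Java sketch below contrasts a hard rate limit, which rejects requests over the limit, with spike-control-style shaping, which makes excess requests wait. A concurrency cap stands in for the per-second window.

```java
import java.util.concurrent.Semaphore;

// A simplified contrast between rejecting excess requests (rate-limit style) and
// smoothing them out by making them wait (spike-control style), using a concurrency
// cap as a stand-in for a per-second limit of 500 requests.
public class BackendProtection {

    private static final int MAX_CONCURRENT = 500;           // backend capacity stand-in
    private final Semaphore permits = new Semaphore(MAX_CONCURRENT);

    // Rate-limit style: over-limit requests are rejected immediately.
    public String rateLimited(String request) {
        if (!permits.tryAcquire()) {
            return "429 Too Many Requests";
        }
        try {
            return callBackend(request);
        } finally {
            permits.release();
        }
    }

    // Spike-control style: over-limit requests wait for capacity, so bursts are
    // flattened instead of hammering the backend.
    public String spikeControlled(String request) throws InterruptedException {
        permits.acquire();                                    // blocks until capacity is free
        try {
            return callBackend(request);
        } finally {
            permits.release();
        }
    }

    private String callBackend(String request) {
        return "200 OK for " + request;                       // stand-in for the real system call
    }

    public static void main(String[] args) throws InterruptedException {
        BackendProtection protection = new BackendProtection();
        System.out.println(protection.rateLimited("order-1"));
        System.out.println(protection.spikeControlled("order-2"));
    }
}
```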