A team is planning to enhance an Experience API specification, and they are following API-led connectivity design principles. What is their motivation for enhancing the API?
A. The primary API consumer wants certain kinds of endpoints changed from the Center for Enablement standard to the consumer system standard
B. The underlying System API is updated to provide more detailed data for several heavily used resources
C. An IP Allowlist policy is being added to the API instances in the Development and Staging environments
D. A Canonical Data Model is being adopted that impacts several types of data included in the API
Explanation:
In API-led design, an Experience API is enhanced to improve how data is
delivered to end-user applications. A primary motivation for enhancing an Experience API
is the adoption of a new data standard, such as a Canonical Data Model, because it changes
several types of data included in the API and therefore the shape of what the Experience API
exposes to its consumers.
A TemperatureSensors API instance is defined in API Manager in the PROD environment
of the CAR_FACTORY business group. An AcmeTemperatureSensors Mule
application implements this API instance and is deployed from Runtime Manager to the
PROD environment of the CAR_FACTORY business group. A policy that requires a valid
client ID and client secret is applied in API Manager to the API instance.
Where can an API consumer obtain a valid client ID and client secret to call the
AcmeTemperatureSensors Mule application?
A. In secrets manager, request access to the Shared Secret static username/password
B. In API Manager, from the PROD environment of the CAR_FACTORY business group
C. In access management, from the PROD environment of the CAR_FACTORY business group
D. In Anypoint Exchange, from an API client application that has been approved for the TemperatureSensors API instance
Explanation:
When an API policy requiring a client ID and client secret is applied to an
API instance in API Manager, API consumers obtain these credentials through a client
application registered in Anypoint Exchange: the consumer requests access to the
TemperatureSensors API instance, and once the client application is approved, Exchange
issues the client ID and client secret to be used on every call.
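As a rough illustration, an approved consumer would then pass the issued credentials on each request. The sketch below uses Python's requests library; the URL and header names are assumptions (the Client ID Enforcement policy commonly reads client_id and client_secret headers, but the exact expressions are configurable per policy):

```python
import requests

# Hypothetical endpoint for the TemperatureSensors API instance; the real
# URL comes from the API's listing in Anypoint Exchange / API Manager.
API_URL = "https://example.cloudhub.io/api/sensors"

# Placeholder credentials issued to a client application that was approved
# for this API instance in Anypoint Exchange.
CLIENT_ID = "your-client-id"
CLIENT_SECRET = "your-client-secret"

response = requests.get(
    API_URL,
    headers={
        "client_id": CLIENT_ID,          # header names assumed; policy-configurable
        "client_secret": CLIENT_SECRET,
    },
    timeout=5,
)
response.raise_for_status()
print(response.json())
```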
A company deployed an API to a single worker/replica in the shared cloud in the U.S. West Region. What happens when the Availability Zone experiences an outage?
A. CloudHub will auto-redeploy the API in the U.S. East Region
B. The API will be unavailable until the Availability Zone comes back online, at which time the worker/replica will be auto-restarted
C. CloudHub will auto-redeploy the API in another Availability Zone in the U.S. West Region
D. The Anypoint Platform admin is alerted when the API is experiencing an outage and needs to trigger the CI/CD pipeline to redeploy to the U.S. East Region
Explanation:
In a CloudHub deployment with a single worker/replica located in a specific
Availability Zone (AZ), if an AZ experiences an outage, here’s what happens:
Worker Availability: Since the application is deployed in a single AZ, CloudHub
does not automatically redeploy the application in a different zone or region during
an outage. Thus, if the current AZ is unavailable, the application will be offline.
Auto-Restart upon AZ Recovery: Once the affected AZ is back online, CloudHub
will auto-restart the worker in the same AZ without manual intervention. This ensures that as soon as the AZ is functional, the application resumes
automatically.
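Because CloudHub will not fail over a single-worker application on its own, API clients may want to tolerate the outage window themselves. A minimal client-side sketch in Python; the URL and retry parameters are hypothetical:

```python
import time
import requests

def call_with_retry(url: str, attempts: int = 5, backoff_s: float = 2.0):
    """Retry an idempotent GET while a single-worker CloudHub app is
    offline during an AZ outage; CloudHub restarts the worker once the
    AZ recovers, so later attempts can succeed without a redeployment."""
    for attempt in range(1, attempts + 1):
        try:
            resp = requests.get(url, timeout=2)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            if attempt == attempts:
                raise  # give up after the final attempt
            time.sleep(backoff_s * attempt)  # back off between attempts
```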
Refer to the exhibit.
An organization uses one specific CloudHub (AWS) region for all CloudHub deployments.
How are CloudHub workers assigned to availability zones (AZs) when the organization's
Mule applications are deployed to CloudHub in that region?
A. Workers belonging to a given environment are assigned to the same AZ within that region
B. AZs are selected as part of the Mule application's deployment configuration
C. Workers are randomly distributed across available AZs within that region
D. An AZ is randomly selected for a Mule application, and all the Mule application's CloudHub workers are assigned to that one AZ
Workers are randomly distributed across available AZs within that region
Explanation:
Correct Answer: Workers are randomly distributed across available AZs within that region.
*****************************************
>> Currently, we can only choose the AWS Region; there is no configuration or
deployment option at all to decide which Availability Zone (AZ) is assigned to which
worker.
>> There are also NO fixed or implicit platform rules for assigning AZs to workers
based on environment or application.
>> Workers are assigned completely at random. However, CloudHub does ensure that
HA is achieved by distributing an application's workers across more than one AZ, so
that all workers of the same application are not assigned to the same AZ.
Reference: https://help.mulesoft.com/s/question/0D52T000051rqDj/one-cloudhub-aws-region-howcloudhub-workers-are-assigned-to-availability-zones-azs-
What is a best practice when building System APIs?
A. Document the API using an easily consumable asset like a RAML definition
B. Model all API resources and methods to closely mimic the operations of the backend system
C. Build an Enterprise Data Model (Canonical Data Model) for each backend system and apply it to System APIs
D. Expose to API clients all technical details of the API implementation's interaction with the backend system
Model all API resources and methods to closely mimic the operations of the backend system
Explanation:
Correct Answer: Model all API resources and methods to closely mimic the operations of
the backend system.
*****************************************
>> There are NO fixed, universal best practices when choosing data models for APIs. The
choice is completely contextual and depends on a number of factors. Based on those
factors, an enterprise can decide whether to go with an Enterprise (Canonical) Data Model,
a Bounded Context Model, etc.
>> One should NEVER expose the technical details of an API implementation to its API
clients. Only the API interface (RAML) is exposed to API clients.
>> It is true that the RAML definitions of APIs should be as detailed as possible and should
reflect most of the documentation. However, that alone is NOT enough to call an API well
documented. There should be further documentation on Anypoint Exchange, with API
Notebooks etc., to create a developer-friendly API and repository.
>> The best practice when creating System APIs is to design their API interfaces by
modeling their resources and methods to closely reflect the operations and functionality
of that backend system.
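As a hedged sketch of that last point, the HTTP resource below mirrors a backend "look up customer" operation one-to-one, with only the field names lightly sanitized. It uses Flask purely for illustration; the backend function and record shape are invented:

```python
from flask import Flask, jsonify

app = Flask(__name__)

def backend_get_customer(customer_id: str) -> dict:
    # Stand-in for the real backend call; the record shape is invented.
    return {"CUST_ID": customer_id, "CUST_NAME": "Acme Corp"}

# The System API resource mirrors the backend operation one-to-one:
# GET /customers/<id> maps directly to the backend's customer lookup.
@app.route("/customers/<customer_id>")
def get_customer(customer_id):
    record = backend_get_customer(customer_id)
    return jsonify(
        customerId=record["CUST_ID"],     # lightly sanitized field names
        customerName=record["CUST_NAME"],
    )
```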
A company is building an application network using MuleSoft's recommendations for various API layers. What is the main (default) role of a process API in an application network?
A. To secure and optimize the data synchronization processing of large data dumps between back-end systems
B. To manage and process the secure direct communication between a back-end system and an end-user client or mobile device in the application network
C. To automate parts of business processes by coordinating and orchestrating the invocation of other APIs in the application network
D. To secure, manage, and process communication with specific types of end-user client applications or devices in the application network
Explanation:
The main (default) role of a Process API is orchestration: it automates parts of business
processes by coordinating and sequencing the invocation of other APIs in the application
network (typically System APIs), independently of both the source systems and the
channels through which the results are delivered.
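A minimal sketch of that orchestration role, assuming two hypothetical System API endpoints (a real Process API would be a Mule application, not a Python script):

```python
import requests

# Hypothetical System API endpoints; in a real application network these
# would be discovered through Anypoint Exchange.
ORDERS_API = "https://example.cloudhub.io/system/orders"
CUSTOMERS_API = "https://example.cloudhub.io/system/customers"

def get_order_with_customer(order_id: str) -> dict:
    """Process-API-style orchestration: call one System API, use its
    result to call another, and combine both responses."""
    order = requests.get(f"{ORDERS_API}/{order_id}", timeout=5).json()
    customer = requests.get(
        f"{CUSTOMERS_API}/{order['customerId']}", timeout=5
    ).json()
    return {"order": order, "customer": customer}
```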
When could the API data model of a System API reasonably mimic the data model
exposed by the corresponding backend system, with minimal improvements over the
backend system's data model?
A. When there is an existing Enterprise Data Model widely used across the organization
B. When the System API can be assigned to a bounded context with a corresponding data model
C. When a pragmatic approach with only limited isolation from the backend system is deemed appropriate
D. When the corresponding backend system is expected to be replaced in the near future
When a pragmatic approach with only limited isolation from the backend system is deemed appropriate
Explanation:
Correct Answer: When a pragmatic approach with only limited isolation from the backend
system is deemed appropriate.
*****************************************
General guidance w.r.t. choosing data models:
>> If an Enterprise Data Model is in use then the API data model of System APIs should
make use of data types from that Enterprise Data Model and the corresponding API
implementation should translate between these data types from the Enterprise Data Model
and the native data model of the backend system.
>> If no Enterprise Data Model is in use then each System API should be assigned to a
Bounded Context, the API data model of System APIs should make use of data types from
the corresponding Bounded Context Data Model and the corresponding API
implementation should translate between these data types from the Bounded Context Data
Model and the native data model of the backend system. In this scenario, the data types in
the Bounded Context Data Model are defined purely in terms of their business
characteristics and are typically not related to the native data model of the backend system.
In other words, the translation effort may be significant.
>> If no Enterprise Data Model is in use, and the definition of a clean Bounded Context
Data Model is considered too much effort, then the API data model of System APIs should
use data types that approximately mirror those of the backend system: the same
semantics and naming as the backend system, lightly sanitized, exposing all fields needed
for the given System API's functionality (but not significantly more) and making good use
of REST conventions.
The latter approach, i.e., exposing in System APIs an API data model that basically mirrors
that of the backend system, does not provide satisfactory isolation from backend systems
through the System API tier on its own. In particular, it will typically not be possible to
"swap out" a backend system without significantly changing all System APIs in front of that
backend system and therefore the API implementations of all Process APIs that depend on
those System APIs! This is so because it is not desirable to prolong the life of a previous
backend system’s data model in the form of the API data model of System APIs that now
front a new backend system. The API data models of System APIs following this approach
must therefore change when the backend system is replaced.
On the other hand:
>> It is a very pragmatic approach that adds comparatively little overhead over accessing
the backend system directly
>> Isolates API clients from intricacies of the backend system outside the data model
(protocol, authentication, connection pooling, network address, …)
>> Allows the usual API policies to be applied to System APIs
>> Makes the API data model for interacting with the backend system explicit and visible,
by exposing it in the RAML definitions of the System APIs
>> Further isolation from the backend system data model does occur in the API
implementations of the Process API layer that consume these System APIs
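To make the translation idea concrete, here is a small hedged sketch in Python: a System API implementation mapping a backend's native record (invented field names like PRD_CD) into a Bounded Context data type defined purely in business terms:

```python
from dataclasses import dataclass

# Bounded Context data model type, defined in business terms.
@dataclass
class Product:
    sku: str
    display_name: str
    unit_price: float

def translate_product(backend_record: dict) -> Product:
    """System API implementation translates the backend's native field
    names and representations into the Bounded Context data model."""
    return Product(
        sku=backend_record["PRD_CD"],
        display_name=backend_record["PRD_DESC"].strip().title(),
        unit_price=float(backend_record["PRD_PRICE_CENTS"]) / 100.0,
    )

# Example:
# translate_product({"PRD_CD": "A1", "PRD_DESC": " WIDGET ", "PRD_PRICE_CENTS": "1999"})
# -> Product(sku='A1', display_name='Widget', unit_price=19.99)
```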
An Order API triggers a sequence of other API calls to look up details of an order's items in
a back-end inventory database. The Order API calls the OrderItems process API, which
calls the Inventory system API. The Inventory system API performs database operations in
the back-end inventory database.
The network connection between the Inventory system API and the database is known to
be unreliable and hang at unpredictable times.
Where should a two-second timeout be configured in the API processing sequence so that
the Order API never waits more than two seconds for a response from the OrderItems
process API?

A. In the OrderItems process API implementation
B. In the Order API implementation
C. In the Inventory system API implementation
D. In the inventory database
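As background on where such a timeout takes effect: a client-side timeout bounds only the caller's own outbound call, so capping the Order API's wait on the OrderItems process API means setting the timeout on that call in the Order API implementation. A hedged Python sketch with a hypothetical URL; note that requests applies the timeout to the connect and read phases separately:

```python
import requests

# Hypothetical endpoint for the OrderItems process API.
ORDER_ITEMS_API = "https://example.cloudhub.io/process/order-items"

def fetch_order_items(order_id: str) -> dict:
    """The Order API bounds its own wait on the downstream call: a
    2-second client-side timeout on the request to the OrderItems
    process API ensures the Order API never waits longer, regardless
    of how the Inventory system API or the database behave further down."""
    resp = requests.get(f"{ORDER_ITEMS_API}/{order_id}", timeout=2)
    resp.raise_for_status()
    return resp.json()
```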