When could the API data model of a System API reasonably mimic the data model
exposed by the corresponding backend system, with minimal improvements over the
backend system's data model?
A.
When there is an existing Enterprise Data Model widely used across the organization
B.
When the System API can be assigned to a bounded context with a corresponding data
model
C.
When a pragmatic approach with only limited isolation from the backend system is deemed appropriate
D.
When the corresponding backend system is expected to be replaced in the near future
Explanation:
Correct Answer: When a pragmatic approach with only limited isolation from the backend
system is deemed appropriate.
*****************************************
General guidance with respect to choosing data models:
>> If an Enterprise Data Model is in use then the API data model of System APIs should
make use of data types from that Enterprise Data Model and the corresponding API
implementation should translate between these data types from the Enterprise Data Model
and the native data model of the backend system.
>> If no Enterprise Data Model is in use then each System API should be assigned to a
Bounded Context, the API data model of System APIs should make use of data types from
the corresponding Bounded Context Data Model and the corresponding API
implementation should translate between these data types from the Bounded Context Data
Model and the native data model of the backend system. In this scenario, the data types in
the Bounded Context Data Model are defined purely in terms of their business
characteristics and are typically not related to the native data model of the backend system.
In other words, the translation effort may be significant.
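To make the translation concrete, here is a minimal TypeScript sketch. The backend record layout (ORD_NO, STAT_CD, etc.) and the Order type are hypothetical stand-ins for a real backend model and Bounded Context Data Model:

```typescript
// Hypothetical native record as returned by the backend system.
interface BackendOrderRecord {
  ORD_NO: string;  // backend-specific naming
  CUST_NO: string;
  ORD_DT: string;  // e.g. "20240131", in the backend's own date format
  STAT_CD: number; // numeric status code only the backend understands
}

// Data type from the Bounded Context Data Model: defined purely in
// business terms, deliberately unrelated to the backend's naming.
interface Order {
  orderId: string;
  customerId: string;
  placedOn: string; // ISO-8601 date
  status: "OPEN" | "SHIPPED" | "CANCELLED";
}

// The System API implementation owns this (potentially significant)
// translation between the two models.
function toBoundedContextOrder(rec: BackendOrderRecord): Order {
  const statusByCode: Record<number, Order["status"]> = {
    1: "OPEN",
    2: "SHIPPED",
    9: "CANCELLED",
  };
  return {
    orderId: rec.ORD_NO,
    customerId: rec.CUST_NO,
    placedOn: `${rec.ORD_DT.slice(0, 4)}-${rec.ORD_DT.slice(4, 6)}-${rec.ORD_DT.slice(6, 8)}`,
    status: statusByCode[rec.STAT_CD] ?? "OPEN",
  };
}
```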
>> If no Enterprise Data Model is in use, and the definition of a clean Bounded Context
Data Model is considered too much effort, then the API data model of System APIs should
make use of data types that approximately mirror those of the backend system: same
semantics and naming as the backend system, lightly sanitized, exposing all fields needed
for the given System API's functionality but not significantly more, and making good use of
REST conventions.
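Under this pragmatic mirroring approach, by contrast, the mapping is close to an identity. A sketch reusing the hypothetical BackendOrderRecord from the previous example:

```typescript
// The API data model keeps the backend's semantics and (lightly
// sanitized) naming, so the mapping is nearly one-to-one.
interface MirroredOrder {
  ordNo: string;  // ORD_NO, renamed only to follow JSON naming conventions
  custNo: string; // CUST_NO
  ordDt: string;  // ORD_DT, passed through in the backend's own format
  statCd: number; // STAT_CD, still the backend's numeric status code
}

function toMirroredOrder(rec: BackendOrderRecord): MirroredOrder {
  // Light sanitization only: camelCase names and dropping fields the
  // System API does not need, but no semantic translation.
  return {
    ordNo: rec.ORD_NO,
    custNo: rec.CUST_NO,
    ordDt: rec.ORD_DT,
    statCd: rec.STAT_CD,
  };
}
```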
The latter approach, i.e., exposing in System APIs an API data model that basically mirrors
that of the backend system, does not provide satisfactory isolation from backend systems
through the System API tier on its own. In particular, it will typically not be possible to
"swap out" a backend system without significantly changing all System APIs in front of that
backend system and therefore the API implementations of all Process APIs that depend on
those System APIs! This is so because it is not desirable to prolong the life of a previous
backend system’s data model in the form of the API data model of System APIs that now
front a new backend system. The API data models of System APIs following this approach
must therefore change when the backend system is replaced.
On the other hand:
>> It is a very pragmatic approach that adds comparatively little overhead over accessing
the backend system directly
>> Isolates API clients from intricacies of the backend system outside the data model
(protocol, authentication, connection pooling, network address, …), as illustrated in the sketch after this list
>> Allows the usual API policies to be applied to System APIs
>> Makes the API data model for interacting with the backend system explicit and visible,
by exposing it in the RAML definitions of the System APIs
>> Further isolation from the backend system data model does occur in the API
implementations of the Process APIs that consume these System APIs
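The following sketch illustrates that isolation even with a mirrored data model; the backend URL and authentication scheme are placeholders, and a real System API would of course be a Mule application rather than a hand-written Node service:

```typescript
// Minimal sketch (Node 18+, built-in fetch): the API client sees plain
// HTTPS and JSON, while the backend's protocol details, credentials, and
// network address stay hidden behind the System API implementation.
import { createServer } from "node:http";

const BACKEND_URL = "https://backend.internal.example.com"; // placeholder
const BACKEND_TOKEN = process.env.BACKEND_TOKEN ?? "";      // backend-specific auth

createServer(async (req, res) => {
  // The client never learns the backend's address or authentication scheme.
  const upstream = await fetch(`${BACKEND_URL}${req.url ?? ""}`, {
    headers: { Authorization: `Bearer ${BACKEND_TOKEN}` },
  });
  res.writeHead(upstream.status, { "content-type": "application/json" });
  res.end(await upstream.text());
}).listen(8081);
```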
An IT Security Compliance Auditor is assessing which nonfunctional requirements (NFRs)
are already being implemented to meet security measures.
A. The API invocations are coming from a known subnet range
B. Username/password supported to validate login credentials
C. Sensitive data is masked to prevent compromising critical information
D. The API is protected against XML invocation attacks
E. Performance expectations are to be allowed up to 1,000 requests per second
Refer to the exhibit.
A developer is building a client application to invoke an API deployed to the STAGING
environment that is governed by a client ID enforcement policy.
What is required to successfully invoke the API?
A.
The client ID and secret for the Anypoint Platform account owning the API in the STAGING environment
B.
The client ID and secret for the Anypoint Platform account's STAGING environment
C.
The client ID and secret obtained from Anypoint Exchange for the API instance in the
STAGING environment
D.
A valid OAuth token obtained from Anypoint Platform and its associated client ID and
secret
Explanation:
Correct Answer: The client ID and secret obtained from Anypoint Exchange for the API
instance in the STAGING environment
*****************************************
>> We CANNOT use the client ID and secret of the Anypoint Platform account, or of any
individual environment, to access the APIs.
>> As the policy enforced on the API in question is a "Client ID Enforcement" policy,
OAuth-token-based access won't work.
The right way to access the API is to use the client ID and secret obtained from Anypoint
Exchange for the API instance in the particular environment we want to work with.
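As a sketch, a client call could look as follows, assuming the policy is configured (as is common) to read client_id and client_secret from request headers; the URL and environment variable names are placeholders:

```typescript
// Invoking the API instance in STAGING with the credentials obtained via
// the "Request access" flow in Anypoint Exchange. Placeholder URL.
const response = await fetch("https://order-api.staging.example.com/orders", {
  headers: {
    client_id: process.env.CLIENT_ID ?? "",         // from Anypoint Exchange
    client_secret: process.env.CLIENT_SECRET ?? "", // from Anypoint Exchange
  },
});
console.log(response.status); // 401/403 without valid credentials
```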
References:
Managing API instance Contracts on API Manager
https://docs.mulesoft.com/api-manager/1.x/request-access-to-api-task
https://docs.mulesoft.com/exchange/to-request-access
https://docs.mulesoft.com/api-manager/2.x/policy-mule3-client-id-based-policies
A retail company is using an Order API to accept new orders. The Order API uses a JMS
queue to submit orders to a backend order management service. The normal load for
orders is being handled using two (2) CloudHub workers, each configured with 0.2 vCore.
The CPU load of each CloudHub worker normally runs well below 70%. However, several
times during the year the Order API gets four times (4x) the average number of orders.
This causes the CloudHub worker CPU load to exceed 90% and the order submission time
to exceed 30 seconds. The cause, however, is NOT the backend order management
service, which still responds fast enough to meet the response SLA for the Order API.
What is the MOST resource-efficient way to configure the Mule application's CloudHub
deployment to help the company cope with this performance challenge?
A.
Permanently increase the size of each of the two (2) CloudHub workers by at least four
times (4x) to one (1) vCore
B.
Use a vertical CloudHub autoscaling policy that triggers on CPU utilization greater than
70%
C.
Permanently increase the number of CloudHub workers by four times (4x) to eight (8)
CloudHub workers
D.
Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater
than 70%
Explanation:
Correct Answer: Use a horizontal CloudHub autoscaling policy that triggers on CPU
utilization greater than 70%
*****************************************
The scenario clearly states that the usual traffic throughout the year is handled well by the
existing worker configuration, with CPU running well below 70%. The problem occurs only
occasionally, when there is a spike in the number of incoming orders.
So we neither need to permanently increase the size of each worker nor permanently
increase the number of workers. Either would be wasteful, since outside those occasional
spikes the extra resources would sit idle.
That leaves two options: a horizontal CloudHub autoscaling policy that automatically
increases the number of workers, or a vertical CloudHub autoscaling policy that
automatically increases the vCore size of each worker.
Here, we need to take two things into consideration:
1. CPU
2. Order Submission Rate to JMS Queue
>> From a CPU perspective, both options (horizontal and vertical scaling) solve the issue;
both bring usage back below 90%.
>> However, with vertical scaling the application is still load-balanced across only two
workers, so there may not be much improvement in the incoming request processing rate
or the order submission rate to the JMS queue. Throughput stays roughly the same; only
CPU utilization comes down.
>> With horizontal scaling, by contrast, new workers are spawned and added to the load
balancer, which raises throughput. This addresses both the CPU utilization and the order
submission rate.
Hence, a horizontal CloudHub autoscaling policy is the right answer.
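The behavior can be pictured with a small TypeScript sketch; the 70% threshold comes from the answer, while the scale-down threshold, worker bounds, and names are purely illustrative and NOT CloudHub's actual implementation:

```typescript
// Illustrative horizontal autoscaling: change the worker COUNT, not the
// worker size, when sustained CPU crosses a threshold.
interface ScalingState {
  workers: number;
  minWorkers: number;
  maxWorkers: number;
}

function nextWorkerCount(state: ScalingState, avgCpuPercent: number): number {
  if (avgCpuPercent > 70 && state.workers < state.maxWorkers) {
    return state.workers + 1; // spike: an extra worker raises total throughput
  }
  if (avgCpuPercent < 30 && state.workers > state.minWorkers) {
    return state.workers - 1; // quiet period: release idle capacity
  }
  return state.workers; // normal load: the usual two workers suffice
}

// During a 4x order spike CPU exceeds 90%, so the worker count grows: 2 -> 3
console.log(nextWorkerCount({ workers: 2, minWorkers: 2, maxWorkers: 8 }, 92));
```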
A company deploys Mule applications with default configurations through Runtime Manager to customer-hosted Mule runtimes. Each Mule application is an API implementation that exposes RESTful interfaces to API clients. The Mule runtimes are managed by the MuleSoft-hosted control plane. The payload is never used by any Logger components.
When an API client sends an HTTP request to a customer-hosted Mule application, which metadata or data (payload) is pushed to the MuleSoft-hosted control plane?
A. Only the data
B. No data
C. The data and metadata
D. Only the metadata
A customer wants to monitor and gain insights about the number of requests coming in a
given time period as well as to measure key performance indicators
(response times, CPU utilization, number of active APIs).
Which tool provides these data insights?
A. Anypoint Monitoring
B. API Manager
C. Runtime Alerts
D. Functional Monitoring
What is true about where an API policy is defined in Anypoint Platform and how it is then applied to API instances?
A.
The API policy is defined in Runtime Manager as part of the API deployment to a Mule
runtime, and then ONLY applied to the specific API instance
B.
The API policy is defined in API Manager for a specific API instance, and then ONLY
applied to the specific API instance
C.
The API policy is defined in API Manager and then automatically applied to ALL API instances
D.
The API policy is defined in API Manager, and then applied to ALL API instances in the
specified environment
Explanation:
Correct Answer: The API policy is defined in API Manager for a specific API instance, and
then ONLY applied to the specific API instance.
*****************************************
>> Once our API specifications are ready and published to Exchange, we need to visit API
Manager and register an API instance for each API.
>> API Manager is the place where management of API aspects takes place like
addressing NFRs by enforcing policies on them.
>> We can create multiple instances of the same API and manage them differently for
different purposes.
>> One instance can have one set of API policies applied while another instance of the
same API has a different set of policies applied for some other purpose.
>> These APIs and their instances are defined on a PER-environment basis, so they need
to be managed separately in each environment.
>> Using platform features, we can ensure that the same configuration of API instances
(SLAs, policies, etc.) gets promoted when promoting to higher environments, but this is
optional; the configuration can still be changed per environment if needed.
>> Runtime Manager is the place to manage API implementations and their Mule runtimes,
but NOT the APIs themselves. Although API policies get executed in Mule runtimes, we
CANNOT enforce API policies from Runtime Manager; that must be done via API Manager,
for a specific instance in a specific environment.
So, based on these facts, the right statement among the given choices is: "The API policy is
defined in API Manager for a specific API instance, and then ONLY applied to the specific
API instance".
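The scoping rule can be summarized in a small TypeScript sketch; the API and policy names are illustrative:

```typescript
// Policies attach to exactly one API instance in one environment;
// nothing propagates automatically to other instances or environments.
type Environment = "DEV" | "STAGING" | "PROD";

interface ApiInstance {
  apiName: string;
  environment: Environment;
  policies: string[]; // policies applied to this instance only
}

const instances: ApiInstance[] = [
  { apiName: "order-api", environment: "STAGING", policies: ["client-id-enforcement"] },
  // Same API, different instance: its own independent policy set.
  { apiName: "order-api", environment: "PROD", policies: ["client-id-enforcement", "rate-limiting"] },
];
```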
Reference: https://docs.mulesoft.com/api-manager/2.x/latest-overview-concept
Refer to the exhibit.

A. Option A
B. Option B
C. Option C
D. Option D
Explanation:
Correct Answer: Allow System APIs to return data that is NOT currently required by the
identified Process or Experience APIs.
