A large organization with an experienced central IT department is getting started using MuleSoft. There is a project to connect a siloed back-end system to a new Customer Relationship Management (CRM) system. The Center for Enablement is coaching them to use API-led connectivity. What action would support the creation of an application network using API-led connectivity?
A. Invite the business analyst to create a business process model to specify the canonical data model between the two systems
B. Determine if the new CRM system supports the creation of custom REST APIs, establishes a private network with CloudHub, and supports OAuth 2.0 authentication
C. To expedite this project, central IT should extend the CRM system and back-end systems to connect to one another using built-in integration interfaces
D. Create a System API to unlock the data on the back-end system using a REST API
Explanation:
For an organization starting with API-led connectivity to integrate a siloed
back-end system with a new CRM, the following approach aligns with best practices and
MuleSoft’s Center for Enablement (C4E) guidance:
API-led Connectivity: This model organizes APIs into distinct layers (System,
Process, and Experience) to improve reusability, modularity, and manageability.
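As a purely illustrative sketch (not part of the question), the System-layer role of unlocking back-end data over REST can be pictured as follows; the endpoint path, the stubbed back-end lookup, and the use of Flask in place of a Mule application are all assumptions made only for illustration.

```python
# Hypothetical sketch of a System API that exposes siloed back-end data over REST.
# In a real MuleSoft project this layer would be a Mule application; Flask is used
# here only to illustrate the System-layer role of unlocking the back-end data.
from flask import Flask, jsonify, abort

app = Flask(__name__)

def fetch_customer_from_backend(customer_id):
    """Stand-in for the siloed back-end lookup (hypothetical data)."""
    backend_records = {"42": {"id": "42", "name": "Ada Lovelace", "tier": "gold"}}
    return backend_records.get(customer_id)

@app.route("/customers/<customer_id>", methods=["GET"])
def get_customer(customer_id):
    # The System API hides the back-end's internals behind a plain REST resource.
    record = fetch_customer_from_backend(customer_id)
    if record is None:
        abort(404)
    return jsonify(record)

if __name__ == "__main__":
    app.run(port=8081)
```

Process and Experience APIs would then consume this resource rather than the back-end system directly, which is what makes the asset reusable across the application network.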
An organization wants to create a Center for Enablement (C4E). The IT director schedules a series of meetings with IT senior managers. What should be on the agenda of the first meeting?
A. Define C4E objectives, mission statement, and guiding principles
B. Explore API monetization options based on identified use cases through MuleSoft
C. A walk through of common-services best practices for logging, auditing, exception handling, caching, security via policy, and rate limiting/throttling via policy
D. Specify operating model for the MuleSoft Integrations division
Explanation:
In the initial meeting for establishing a Center for Enablement (C4E), it’s
essential to lay out the foundational vision, objectives, and guiding principles for the team,
since every subsequent C4E decision builds on that shared foundation.
A company has started to create an application network and is now planning to implement a Center for Enablement (C4E) organizational model. What key factor would lead the company to decide upon a federated rather than a centralized C4E?
A. When there are a large number of existing common assets shared by development teams
B. When various teams responsible for creating APIs are new to integration and hence need extensive training
C. When development is already organized into several independent initiatives or groups
D. When the majority of the applications in the application network are cloud based
When development is already organized into several independent initiatives or groups
Explanation:
Correct Answer: When development is already organized into several independent
initiatives or groups
*****************************************
>> It would take a lot of process effort for a single C4E team to coordinate with multiple
development teams that are already organized into several independent initiatives. A single,
centralized C4E works best when the different teams share at least a common initiative. So, in
this scenario, a federated C4E works better than a centralized C4E.
A new upstream API is being designed to offer an SLA of 500 ms median and 800 ms
maximum (99th percentile) response time. The corresponding API implementation needs to
sequentially invoke 3 downstream APIs of very similar complexity.
The first of these downstream APIs offers the following SLA for its response time: median:
100 ms, 80th percentile: 500 ms, 95th percentile: 1000 ms.
If possible, how can a timeout be set in the upstream API for the invocation of the first
downstream API to meet the new upstream API's desired SLA?
A. Set a timeout of 50 ms; this times out more invocations of that API but gives additional room for retries
B. Set a timeout of 100 ms; that leaves 400 ms for the other two downstream APIs to complete
C. No timeout is possible to meet the upstream API's desired SLA; a different SLA must be negotiated with the first downstream API, or an alternative API must be invoked
D. Do not set a timeout; the invocation of this API is mandatory, so we must wait until it responds
Set a timeout of 100 ms; that leaves 400 ms for the other two downstream APIs to complete
Explanation:
Correct Answer: Set a timeout of 100 ms; that leaves 400 ms for the other two downstream APIs to complete
*****************************************
Key details to take from the given scenario:
>> The upstream API's desired SLA is 500 ms (median). Let's ignore the maximum SLA response times.
>> This API calls 3 downstream APIs sequentially, and all of them are of similar complexity.
>> The first downstream API offers a median SLA of 100 ms, an 80th percentile of 500 ms, and a 95th percentile of 1000 ms.
Based on the above details:
>> We can rule out the option suggesting a 50 ms timeout. If the offered median SLA itself is
100 ms, then most calls would time out, time would be wasted retrying them, and the retries
would eventually be exhausted. Even if some retries succeeded, the remaining time would not
leave enough room for the 2nd and 3rd downstream APIs to respond in time.
>> The option suggesting NOT to set a timeout because the invocation of this API is mandatory
is also wrong. Not setting a timeout goes against good implementation practice, and if the first
API does not respond within its offered median SLA of 100 ms, it would most likely respond in
500 ms (80th percentile) or 1000 ms (95th percentile). In both cases, a successful response from
the 1st downstream API does no good, because by then the upstream API's 500 ms SLA is already
breached and there is no time left to call the 2nd and 3rd downstream APIs.
>> It is NOT true that no timeout can meet the upstream API's desired SLA. Since the 1st
downstream API offers a median SLA of 100 ms, most responses will arrive within that time. So
setting a timeout of 100 ms is ideal for most calls, as it leaves 400 ms of room for the
remaining 2 downstream API calls.
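The budget arithmetic above can be checked with a small worked example; the numbers come straight from the scenario, and the variable names are only illustrative.

```python
# Worked example of the response-time budget from the scenario above.
UPSTREAM_MEDIAN_SLA_MS = 500        # upstream API's median SLA
DOWNSTREAM_CALLS = 3                # sequential downstream invocations
FIRST_DOWNSTREAM_TIMEOUT_MS = 100   # proposed timeout = first API's median SLA

# Budget left for the remaining two downstream APIs if the first call
# uses its full 100 ms timeout:
remaining_budget_ms = UPSTREAM_MEDIAN_SLA_MS - FIRST_DOWNSTREAM_TIMEOUT_MS
print(remaining_budget_ms)  # 400 ms for the other two calls

# By contrast, waiting for the first API's 80th-percentile response (500 ms)
# would already consume the entire upstream median budget:
print(UPSTREAM_MEDIAN_SLA_MS - 500)  # 0 ms left -> upstream SLA breached
```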
An organization has several APIs that accept JSON data over HTTP POST. The APIs are
all publicly available and are associated with several mobile applications and web
applications.
The organization does NOT want to use any authentication or compliance policies for these
APIs, but at the same time, is worried that some bad actor could send payloads that could
somehow compromise the applications or servers running the API implementations.
What out-of-the-box Anypoint Platform policy can address exposure to this threat?
A. Shut out bad actors by using HTTPS mutual authentication for all API invocations
B. Apply an IP blacklist policy to all APIs; the blacklist will include all bad actors
C. Apply a header injection and removal policy that detects the malicious data before it is used
D. Apply a JSON threat protection policy to all APIs to detect potential threat vectors
Apply a JSON threat protection policy to all APIs to detect potential threat vectors
Explanation:
Correct Answer: Apply a JSON threat protection policy to all APIs to detect potential threat vectors
*****************************************
>> Usually, if APIs are designed and developed for specific, known consumers/customers, we
would IP-whitelist those consumers to ensure that traffic comes only from them.
>> However, as this scenario states that the APIs are publicly available and used by many
mobile and web applications, it is NOT possible to identify and blacklist all possible bad actors.
>> So, a JSON threat protection policy is the best option to prevent bad JSON payloads from
such bad actors.
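As a purely hypothetical illustration of the kinds of structural limits such a policy enforces, consider the sketch below; the specific limits (maximum nesting depth, string length, object entries) and the checking function are assumptions for illustration, not the actual Anypoint policy configuration.

```python
# Hypothetical illustration of structural checks similar to JSON threat protection;
# the limits chosen here are arbitrary examples, not MuleSoft defaults.
import json

MAX_DEPTH = 10            # deepest allowed nesting of objects/arrays
MAX_STRING_LENGTH = 512   # longest allowed string value
MAX_OBJECT_ENTRIES = 100  # most key/value pairs allowed in one object

def depth(value, level=1):
    """Walk the parsed JSON and reject values that exceed the limits."""
    if isinstance(value, dict):
        if len(value) > MAX_OBJECT_ENTRIES:
            raise ValueError("too many object entries")
        return max((depth(v, level + 1) for v in value.values()), default=level)
    if isinstance(value, list):
        return max((depth(v, level + 1) for v in value), default=level)
    if isinstance(value, str) and len(value) > MAX_STRING_LENGTH:
        raise ValueError("string value too long")
    return level

def is_payload_acceptable(raw_body: str) -> bool:
    """Reject payloads that are malformed or structurally abusive."""
    try:
        return depth(json.loads(raw_body)) <= MAX_DEPTH
    except ValueError:  # covers both JSON parse errors and limit violations
        return False

print(is_payload_acceptable('{"name": "ok"}'))     # True
print(is_payload_acceptable('[' * 50 + ']' * 50))  # False: nested too deeply
```

In Anypoint Platform, equivalent limits are configured on the out-of-the-box JSON threat protection policy at the API gateway rather than in application code.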
The responses to some HTTP requests can be cached depending on the HTTP verb used
in the request. According to the HTTP specification, for what HTTP verbs is this safe to do?
A. PUT, POST, DELETE
B. GET, HEAD, POST
C. GET, PUT, OPTIONS
D. GET, OPTIONS, HEAD
GET, OPTIONS, HEAD
A team is planning to enhance an Experience API specification, and they are following API-led connectivity design principles. What is their motivation for enhancing the API?
A. The primary API consumer wants certain kinds of endpoints changed from the Center for Enablement standard to the consumer system standard
B. The underlying System API is updated to provide more detailed data for several heavily used resources
C. An IP Allowlist policy is being added to the API instances in the Development and Staging environments
D. A Canonical Data Model is being adopted that impacts several types of data included in the API
Explanation:
In API-led design, an Experience API is enhanced to improve how data is
delivered to end-user applications. One primary reason to enhance an Experience API is the
adoption of new data standards, such as a Canonical Data Model, because such a model changes
the shape of several types of data that the Experience API exposes to its consumers.
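As a purely hypothetical illustration of that impact, the sketch below maps a legacy payload onto an assumed canonical customer model; the field names and the to_canonical helper are invented for illustration and are not part of any MuleSoft standard.

```python
# Hypothetical sketch: adopting a canonical data model changes the shape of the
# data the Experience API returns, so its specification must be enhanced.
from dataclasses import dataclass, asdict

@dataclass
class CanonicalCustomer:
    """Assumed canonical representation shared across the application network."""
    customer_id: str
    full_name: str
    loyalty_tier: str

def to_canonical(legacy_record: dict) -> CanonicalCustomer:
    """Map a legacy Experience API payload onto the canonical model (illustrative)."""
    return CanonicalCustomer(
        customer_id=str(legacy_record["id"]),
        full_name=legacy_record["name"],
        loyalty_tier=legacy_record.get("tier", "standard"),
    )

legacy = {"id": 42, "name": "Ada Lovelace", "tier": "gold"}
print(asdict(to_canonical(legacy)))
# {'customer_id': '42', 'full_name': 'Ada Lovelace', 'loyalty_tier': 'gold'}
```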
A company wants to move its Mule API implementations into production as quickly as
possible. To protect access to all Mule application data and metadata, the company
requires that all Mule applications be deployed to the company's customer-hosted
infrastructure within the corporate firewall. What combination of runtime plane and control
plane options meets these project lifecycle goals?
A. Manually provisioned customer-hosted runtime plane and customer-hosted control plane
B. MuleSoft-hosted runtime plane and customer-hosted control plane
C. Manually provisioned customer-hosted runtime plane and MuleSoft-hosted control plane
D. iPaaS provisioned customer-hosted runtime plane and MuleSoft-hosted control plane
Manually provisioned customer-hosted runtime plane and customer-hosted control plane
Explanation:
Correct Answer: Manually provisioned customer-hosted runtime plane and customer-hosted control plane
*****************************************
There are two key factors to take into consideration from the scenario given in the question:
>> The company requires both data and metadata to reside within the corporate firewall.
>> The company wants to use customer-hosted infrastructure.
Any deployment model that deals with the cloud directly or indirectly (MuleSoft-hosted or the
customer's own cloud, such as Azure or AWS) will have to share at least the metadata.
Application data can be kept inside the firewall by running Mule runtimes on a customer-hosted
runtime plane. But with a MuleSoft-hosted/cloud-based control plane, the control plane requires
at least some minimum level of metadata to be sent outside the corporate firewall.
As the customer requirement is very clear that both data and metadata must remain within the
corporate firewall, then even though the customer wants to move to production as quickly as
possible, the nature of their security requirements leaves no option other than a manually
provisioned customer-hosted runtime plane and a customer-hosted control plane.