Which of the following best fits the definition of API-led connectivity?
A.
API-led connectivity is not just an architecture or technology but also a way to organize people and processes for efficient IT delivery in the organization
B.
API-led connectivity is a 3-layered architecture covering Experience, Process and System layers
C.
API-led connectivity is a technology that enables us to implement Experience, Process and System layer-based APIs
API-led connectivity is not just an architecture or technology but also a way to organize people and processes for efficient IT delivery in the organization
Explanation:
Correct Answer: API-led connectivity is not just an architecture or technology but also a
way to organize people and processes for efficient IT delivery in the organization.
*****************************************
Reference: https://blogs.mulesoft.com/dev/api-dev/what-is-api-led-connectivity/
Which component monitors APIs and endpoints at scheduled intervals, receives reports about whether tests pass or fail, and displays statistics about API and endpoint performance?
A. API Analytics
B. Anypoint Monitoring dashboards
C. API Functional Monitoring
D. Anypoint Runtime Manager alerts
API Functional Monitoring
Explanation:
Correct Answer: API Functional Monitoring
*****************************************
API Functional Monitoring runs tests against APIs and endpoints at scheduled intervals, reports whether those tests pass or fail, and displays statistics about API and endpoint performance.
When could the API data model of a System API reasonably mimic the data model
exposed by the corresponding backend system, with minimal improvements over the
backend system's data model?
A.
When there is an existing Enterprise Data Model widely used across the organization
B.
When the System API can be assigned to a bounded context with a corresponding data
model
C.
When a pragmatic approach with only limited isolation from the backend system is deemed appropriate
D.
When the corresponding backend system is expected to be replaced in the near future
When a pragmatic approach with only limited isolation from the backend system is deemed appropriate
Explanation:
Correct Answer: When a pragmatic approach with only limited isolation from the backend
system is deemed appropriate.
*****************************************
General guidance with respect to choosing Data Models:
>> If an Enterprise Data Model is in use then the API data model of System APIs should
make use of data types from that Enterprise Data Model and the corresponding API
implementation should translate between these data types from the Enterprise Data Model
and the native data model of the backend system.
>> If no Enterprise Data Model is in use then each System API should be assigned to a
Bounded Context, the API data model of System APIs should make use of data types from
the corresponding Bounded Context Data Model and the corresponding API
implementation should translate between these data types from the Bounded Context Data
Model and the native data model of the backend system. In this scenario, the data types in
the Bounded Context Data Model are defined purely in terms of their business
characteristics and are typically not related to the native data model of the backend system.
In other words, the translation effort may be significant.
>> If no Enterprise Data Model is in use, and the definition of a clean Bounded Context
Data Model is considered too much effort, then the API data model of System APIs should
make use of data types that approximately mirror those from the backend system: the same
semantics and naming as the backend system, lightly sanitized, exposing all fields needed
for the given System API's functionality (but not significantly more), and making good use
of REST conventions.
The latter approach, i.e., exposing in System APIs an API data model that basically mirrors
that of the backend system, does not provide satisfactory isolation from backend systems
through the System API tier on its own. In particular, it will typically not be possible to
"swap out" a backend system without significantly changing all System APIs in front of that
backend system and therefore the API implementations of all Process APIs that depend on
those System APIs! This is so because it is not desirable to prolong the life of a previous
backend system’s data model in the form of the API data model of System APIs that now
front a new backend system. The API data models of System APIs following this approach
must therefore change when the backend system is replaced.
On the other hand:
>> It is a very pragmatic approach that adds comparatively little overhead over accessing
the backend system directly
>> Isolates API clients from intricacies of the backend system outside the data model
(protocol, authentication, connection pooling, network address, …)
>> Allows the usual API policies to be applied to System APIs
>> Makes the API data model for interacting with the backend system explicit and visible,
by exposing it in the RAML definitions of the System APIs
>> Further isolation from the backend system data model does occur in the API
implementations of the Process API layer, which translate between the System API data
models and their own.
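For illustration only, here is a minimal Java sketch of the two approaches described above
(all class and field names are hypothetical, not taken from any MuleSoft material):
translating the backend's native model into a Bounded Context Data Model versus exposing
a lightly sanitized mirror of it.

// Hypothetical native record as returned by the legacy backend system.
record BackendCustomer(String CUST_NO, String CUST_NM, String EMAIL_ADDR, String INTERNAL_FLAG) {}

// Bounded Context Data Model: defined purely in business terms and unrelated
// to the backend's native naming, so the translation effort may be significant.
record Customer(String customerId, String fullName, String email) {}

class CustomerTranslator {
    // Translates the backend's native data type into the Bounded Context type.
    static Customer toBoundedContextModel(BackendCustomer b) {
        // INTERNAL_FLAG is dropped: it has no meaning in the business domain.
        return new Customer(b.CUST_NO(), b.CUST_NM(), b.EMAIL_ADDR());
    }
}

// Lightly sanitized mirror: same semantics and (roughly) the same naming as
// the backend, exposing only the fields this System API actually needs.
record MirroredCustomer(String custNo, String custNm, String emailAddr) {}

class DataModelDemo {
    public static void main(String[] args) {
        BackendCustomer raw = new BackendCustomer("42", "Acme Corp", "ops@acme.example", "Y");
        System.out.println(CustomerTranslator.toBoundedContextModel(raw));
    }
}

Note how swapping out the backend system forces a change to MirroredCustomer (and every
client of it), whereas Customer can stay stable as long as the translator is updated.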
An API implementation is deployed on a single worker on CloudHub and invoked by
external API clients (outside of CloudHub). How can an alert be set up that is guaranteed to
trigger AS SOON AS that API implementation stops responding to API invocations?
A.
Implement a heartbeat/health check within the API and invoke it from outside the Anypoint Platform and alert when the heartbeat does not respond
B.
Configure a "worker not responding" alert in Anypoint Runtime Manager
C.
Handle API invocation exceptions within the calling API client and raise an alert from that API client when the API is unavailable
D.
Create an alert for when the API receives no requests within a specified time period
Configure a "worker not responding" alert in Anypoint Runtime Manager
Explanation:
Correct Answer: Configure a “Worker not responding” alert in Anypoint Runtime Manager.
*****************************************
>> All of the options eventually help to generate the required alert when the application
stops responding.
>> However, handling exceptions within the calling API client and raising an alert from
that client is inappropriate. There could be many API clients invoking the API
implementation, and it is not realistic to set this up consistently in all of them.
>> Implementing a health check/heartbeat within the API and invoking it from outside to
determine the API's health sounds reasonable, but it needs extra setup, and there is a very
good chance of generating false alarms whenever there are intermittent network issues
between the external tool and the health check endpoint on the API implementation. The
API implementation itself may have no issues, yet false alarms may still go out due to
such external factors.
>> Creating an alert for when the API receives no requests within a specified time period
would generate realistic alerts, but even here false alarms may go out when there are
genuinely no requests from API clients.
The best way to achieve this requirement is to set up an alert in Anypoint Runtime
Manager with the condition "Worker not responding". This generates an alert
AS SOON AS the worker becomes unresponsive.
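For illustration only, a minimal Java sketch of the heartbeat/health check approach
discussed above (the /health endpoint URL is hypothetical). The catch block also shows
where the false alarms come from: a network blip between the poller and the API lands in
the same place as a genuine outage.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class HeartbeatPoller {
    public static void main(String[] args) throws InterruptedException {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/health")) // hypothetical endpoint
                .timeout(Duration.ofSeconds(5))
                .GET()
                .build();
        while (true) {
            try {
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                if (response.statusCode() != 200) {
                    System.err.println("ALERT: health check returned " + response.statusCode());
                }
            } catch (Exception e) {
                // An intermittent network issue between this poller and the API
                // also ends up here, even when the API itself is healthy.
                System.err.println("ALERT: health check failed: " + e.getMessage());
            }
            Thread.sleep(30_000); // poll every 30 seconds
        }
    }
}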
A company wants to move its Mule API implementations into production as quickly as
possible. To protect access to all Mule application data and metadata, the company
requires that all Mule applications be deployed to the company's customer-hosted
infrastructure within the corporate firewall. What combination of runtime plane and control
plane options meets these project lifecycle goals?
A.
Manually provisioned customer-hosted runtime plane and customer-hosted control plane
B.
MuleSoft-hosted runtime plane and customer-hosted control plane
C.
Manually provisioned customer-hosted runtime plane and MuleSoft-hosted control plane
D.
iPaaS provisioned customer-hosted runtime plane and MuleSoft-hosted control plane
Manually provisioned customer-hosted runtime plane and customer-hosted control plane
Explanation:
Correct Answer: Manually provisioned customer-hosted runtime plane and customer-hosted
control plane
*****************************************
There are two key factors to take into consideration from the scenario given in the
question:
>> The company requires both data and metadata to reside within the corporate firewall
>> The company would like to go with customer-hosted infrastructure
Any deployment model that deals with the cloud directly or indirectly (MuleSoft-hosted, or
the customer's own cloud such as Azure or AWS) will have to share at least the metadata.
Application data can be kept inside the firewall by running Mule runtimes on a
customer-hosted runtime plane, but a MuleSoft-hosted/cloud-based control plane requires at
least some minimum level of metadata to be sent outside the corporate firewall.
As the customer requirement clearly states that both data and metadata must stay within
the corporate firewall, even though the customer wants to move to production as quickly as
possible, due to the nature of their security requirements they have no option but to go
with a manually provisioned customer-hosted runtime plane and customer-hosted control
plane.
An enterprise is embarking on the API-led digital transformation journey, and the central IT team has started to define System APIs. Currently there is no Enterprise Data Model being defined within the enterprise, and the definition of a clean Bounded Context Data Model requires too much effort. According to MuleSoft's recommended guidelines, how should the System API data model be defined?
A. If there are misspellings of the data fields in the back-end system, System APIs should not correct them, and should expose them as-is to mirror the back-end systems
B. The data model of the System APIs should make use of data types that approximately mirror those from the back-end systems
C. The data model should define its own naming convention, and not follow the same naming as the back-end systems
D. The System APIs should expose all back-end system fields
The data model of the System APIs should make use of data types that approximately mirror those from the back-end systems
Explanation: When defining data models for System APIs without an established
Enterprise Data Model, MuleSoft recommends mirroring the back-end systems' data
types to achieve quick and effective integration without adding complexity. This approach
adds comparatively little overhead over accessing the back-end system directly, while
still isolating API clients from non-data-model intricacies of the back-end system
(protocol, authentication, connection pooling, and so on).
What Mule application deployment scenario requires using Anypoint Platform Private Cloud Edition or Anypoint Platform for Pivotal Cloud Foundry?
A.
When it is required to make ALL applications highly available across multiple data centers
B.
When it is required that ALL APIs are private and NOT exposed to the public cloud
C.
When regulatory requirements mandate on-premises processing of EVERY data item, including meta-data
D.
When ALL backend systems in the application network are deployed in the
organization's intranet
When regulatory requirements mandate on-premises processing of EVERY data item, including meta-data
Explanation:
Correct Answer: When regulatory requirements mandate on-premises processing of EVERY data item, including meta-data.
*****************************************
We do NOT need to use Anypoint Platform PCE or PCF for the scenarios below, so these
options are OUT.
>> We can make ALL applications highly available across multiple data centers using
CloudHub too.
>> We can use Anypoint VPN and tunneling from CloudHub to connect to ALL backend
systems in the application network that are deployed in the organization's intranet.
>> We can use Anypoint VPC and Firewall Rules to make ALL APIs private and NOT
exposed to the public cloud.
The only valid reason among the given options that requires Anypoint Platform PCE/PCF is:
when regulatory requirements mandate on-premises processing of EVERY data item,
including meta-data.
A set of tests must be performed prior to deploying API implementations to a staging
environment. Due to data security and access restrictions, untested APIs cannot be
granted access to the backend systems, so instead mocked data must be used for these
tests. The amount of available mocked data and its contents are sufficient to entirely test the
API implementations with no active connections to the backend systems. What type of
tests should be used to incorporate this mocked data?
A.
Integration tests
B.
Performance tests
C.
Functional tests (Blackbox)
D.
Unit tests (Whitebox)
Unit tests (Whitebox)
Explanation:
Correct Answer: Unit tests (Whitebox)
*****************************************
Reference: https://docs.mulesoft.com/mule-runtime/3.9/testing-strategies
As per general IT testing practice and MuleSoft recommended practice, Integration and
Performance tests should be done on a full end-to-end setup for a proper evaluation, which
means all end systems should be connected while running the tests. So these options are
OUT, and we are left with Unit Tests and Functional Tests.
As per attached reference documentation from MuleSoft:
Unit Tests - are limited to the code that can be realistically exercised without the need to
run it inside Mule itself. Good candidates are small pieces of modular code, subflows,
custom transformers, custom components, custom expression evaluators, etc.
Functional Tests - are those that most extensively exercise your application configuration.
In these tests, you have the freedom and tools for simulating happy and unhappy paths.
You also have the possibility to create stubs for target services and make them succeed or
fail, to easily simulate happy and unhappy paths respectively.
As the scenario in the question demands that the API implementations be tested before
deployment to staging, and clearly indicates that there is a sufficient amount of mocked
data to test the various components of the API implementations with no active connections
to the backend systems, unit tests are the ones to be used to incorporate this mocked data.
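As a minimal illustration (the interface, class, and method names below are hypothetical,
not from the MuleSoft documentation), a whitebox unit test can exercise a piece of API
implementation logic against a stub that serves the mocked data, with no live backend
connection at all:

// Hypothetical backend interface the API implementation logic depends on.
interface AccountClient {
    String fetchAccountName(String accountId);
}

// Piece of API implementation logic under test.
class AccountService {
    private final AccountClient client;
    AccountService(AccountClient client) { this.client = client; }
    String greeting(String accountId) {
        return "Hello, " + client.fetchAccountName(accountId) + "!";
    }
}

public class AccountServiceUnitTest {
    public static void main(String[] args) {
        // Stub backed by mocked data - no active connection to the backend system.
        AccountClient mockedClient = accountId -> "Acme Corp";
        AccountService service = new AccountService(mockedClient);

        String actual = service.greeting("42");
        if (!"Hello, Acme Corp!".equals(actual)) {
            throw new AssertionError("unexpected greeting: " + actual);
        }
        System.out.println("unit test passed");
    }
}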