When could the API data model of a System API reasonably mimic the data model
exposed by the corresponding backend system, with minimal improvements over the
backend system's data model?
A.
When there is an existing Enterprise Data Model widely used across the organization
B.
When the System API can be assigned to a bounded context with a corresponding data
model
C.
When a pragmatic approach with only limited isolation from the backend system is deemed appropriate
D.
When the corresponding backend system is expected to be replaced in the near future
When a pragmatic approach with only limited isolation from the backend system is deemed appropriate
Explanation:
Correct Answer: When a pragmatic approach with only limited isolation from the backend
system is deemed appropriate.
*****************************************
General guidance with respect to choosing data models:
>> If an Enterprise Data Model is in use then the API data model of System APIs should
make use of data types from that Enterprise Data Model and the corresponding API
implementation should translate between these data types from the Enterprise Data Model
and the native data model of the backend system.
>> If no Enterprise Data Model is in use then each System API should be assigned to a
Bounded Context, the API data model of System APIs should make use of data types from
the corresponding Bounded Context Data Model, and the corresponding API
implementation should translate between these data types from the Bounded Context Data
Model and the native data model of the backend system. In this scenario, the data types in
the Bounded Context Data Model are defined purely in terms of their business
characteristics and are typically not related to the native data model of the backend system.
In other words, the translation effort may be significant (a minimal sketch of this translation
appears after this list).
>> If no Enterprise Data Model is in use, and the definition of a clean Bounded Context
Data Model is considered too much effort, then the API data model of System APIs should
make use of data types that approximately mirror those of the backend system: same
semantics and naming as the backend system, lightly sanitized, exposing all fields needed
for the given System API’s functionality but not significantly more, and making good use of
REST conventions.
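To make the Bounded Context case above concrete, here is a minimal sketch in Python (illustration only; the backend field names, the Customer type, and the mapping are all invented) of the kind of translation a System API implementation performs between a backend-native record and a business-oriented Bounded Context data type. Under the pragmatic approach in the last bullet, this mapping would largely disappear and the exposed fields would mirror the backend, lightly sanitized.

```python
from dataclasses import dataclass

# Hypothetical backend-native record, e.g. as returned by a legacy
# customer master system (field names are invented for illustration).
backend_record = {
    "CUST_NO": "0004711",
    "NAME_1": "ACME Corp",
    "CTRY_CD": "DE",
    "CRT_DT": "2021-03-15",
}

# Hypothetical data type from a Bounded Context Data Model: defined purely
# in business terms, deliberately unrelated to the backend's naming.
@dataclass
class Customer:
    customer_id: str
    legal_name: str
    country_code: str

def to_bounded_context_model(record: dict) -> Customer:
    """Translation performed inside the System API implementation:
    backend-native field names and formats are mapped to the
    business-oriented data types of the Bounded Context Data Model."""
    return Customer(
        customer_id=record["CUST_NO"].lstrip("0"),
        legal_name=record["NAME_1"],
        country_code=record["CTRY_CD"],
    )

print(to_bounded_context_model(backend_record))
```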
The latter approach, i.e., exposing in System APIs an API data model that basically mirrors
that of the backend system, does not provide satisfactory isolation from backend systems
through the System API tier on its own. In particular, it will typically not be possible to
"swap out" a backend system without significantly changing all System APIs in front of that
backend system and therefore the API implementations of all Process APIs that depend on
those System APIs! This is because it is not desirable to prolong the life of a previous
backend system’s data model in the form of the API data model of System APIs that now
front a new backend system. The API data models of System APIs following this approach
must therefore change when the backend system is replaced.
On the other hand:
>> It is a very pragmatic approach that adds comparatively little overhead over accessing
the backend system directly
>> Isolates API clients from intricacies of the backend system outside the data model
(protocol, authentication, connection pooling, network address, …)
>> Allows the usual API policies to be applied to System APIs
>> Makes the API data model for interacting with the backend system explicit and visible,
by exposing it in the RAML definitions of the System APIs
>> Further isolation from the backend system data model does occur in the API
implementations of the Process API layer that consume these System APIs
What is true about where an API policy is defined in Anypoint Platform and how it is then applied to API instances?
A.
The API policy is defined in Runtime Manager as part of the API deployment to a Mule
runtime, and then ONLY applied to the specific API instance
B.
The API policy is defined in API Manager for a specific API instance, and then ONLY
applied to the specific API instance
C.
The API policy is defined in API Manager and then automatically applied to ALL API instances
D.
The API policy is defined in API Manager, and then applied to ALL API instances in the
specified environment
The API policy is defined in API Manager for a specific API instance, and then ONLY
applied to the specific API instance
Explanation:
Correct Answer: The API policy is defined in API Manager for a specific API instance, and
then ONLY applied to the specific API instance.
*****************************************
>> Once our API specifications are ready and published to Exchange, we need to visit API
Manager and register an API instance for each API.
>> API Manager is the place where API aspects are managed, such as addressing NFRs
by enforcing policies.
>> We can create multiple instances for the same API and manage them differently for
different purposes.
>> One instance can have one set of API policies applied, and another instance of the same
API can have a different set of policies applied for some other purpose.
>> These APIs and their instances are defined on a per-environment basis, so one needs to
manage them separately in each environment.
>> We can ensure that the same configuration of API instances (SLAs, policies, etc.) gets
promoted to higher environments using the platform's promotion feature, but this is
optional; the configuration can still be changed per environment if needed.
>> Runtime Manager is the place to manage API implementations and their Mule runtimes,
but NOT the APIs themselves. Although API policies get executed in Mule runtimes, we
CANNOT enforce API policies from Runtime Manager. That must be done via API Manager,
for a specific API instance in a given environment.
So, based on these facts, the right statement among the given choices is: "The API policy is
defined in API Manager for a specific API instance, and then ONLY applied to the specific
API instance".
Reference: https://docs.mulesoft.com/api-manager/2.x/latest-overview-concept
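As a purely illustrative sketch of the relationship described above (plain Python, not a real Anypoint Platform API; all names are invented): policies attach to one specific API instance, and instances are registered per environment.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative model only: it captures the relationship described above --
# policies are attached to one specific API instance, and API instances
# are registered per environment in API Manager.
@dataclass
class ApiInstance:
    api_name: str
    environment: str
    policies: List[str] = field(default_factory=list)

    def apply_policy(self, policy: str) -> None:
        # Applying a policy affects ONLY this instance, not other instances
        # of the same API or instances in other environments.
        self.policies.append(policy)

dev_instance = ApiInstance("customer-sapi", environment="dev")
prod_instance = ApiInstance("customer-sapi", environment="prod")

dev_instance.apply_policy("client-id-enforcement")
prod_instance.apply_policy("rate-limiting-sla")

print(dev_instance.policies)   # ['client-id-enforcement']
print(prod_instance.policies)  # ['rate-limiting-sla']
```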
An API is protected with a Client ID Enforcement policy and uses the default configuration. Access is requested for the client application to the API, and an approved contract now exists between the client application and the API. How can a consumer of this API avoid a 401 error "Unauthorized or invalid client application credentials"?
A. Send the obtained token as a header in every call
B. Send the obtained client_id and client_secret in the request body
C. Send the obtained client_id and client_secret as URI parameters in every call
D. Send the obtained client_id and client_secret in the header of every API request call
Explanation:
When using the Client ID Enforcement policy with its default configuration, the client_id and
client_secret are expected to be provided as URI parameters on each request. The policy is
used to control and monitor access by validating that every request carries valid credentials
from an approved contract. To avoid a 401 "Unauthorized or invalid client application
credentials" error, the consumer must therefore send the credentials obtained for the
approved contract with every call, in the location the policy's credential expressions are
configured to read.
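As a minimal sketch of such a call (Python with the requests library; the endpoint URL and credential values are invented, and the credential location should match whatever the applied policy's credential expressions are configured to read):

```python
import requests

API_URL = "https://example.org/api/customers"   # hypothetical endpoint
CLIENT_ID = "my-client-id"                      # obtained from the approved contract
CLIENT_SECRET = "my-client-secret"

# Credentials sent as query parameters on every call, matching a policy
# configured to read them from URI parameters. If the policy's credential
# expressions point at headers instead, move them into a headers dict.
response = requests.get(
    API_URL,
    params={"client_id": CLIENT_ID, "client_secret": CLIENT_SECRET},
)

if response.status_code == 401:
    print("Unauthorized or invalid client application credentials")
else:
    print(response.status_code)
```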
When can CloudHub Object Store v2 be used?
A. To store an unlimited number of key-value pairs
B. To store payloads with an average size greater than 15MB
C. To store information in Mule 4 Object Store v1
D. To store key-value pairs with keys up to 300 characters
Explanation: CloudHub Object Store v2 is a managed key-value store provided by
MuleSoft to support use cases where temporary data storage is required. Here is
why Option D is correct:
Key length support: Object Store v2 allows keys of up to 300 characters, making it
suitable for applications needing flexible and descriptive keys.
Storage limits: Object Store v2 is designed for moderate, transient storage needs
rather than unlimited storage, so Option A is incorrect. Each stored value is also
limited to 10 MB, so payloads with an average size greater than 15 MB are not
supported and Option B is incorrect.
Backward compatibility: Object Store v2 does not provide Object Store v1 behavior
for Mule 4 applications; v1 and v2 are distinct, so Option C is incorrect.
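As a small illustration of these limits (plain Python, not the Object Store connector or its REST API; the constants reflect the documented Object Store v2 constraints, while the helper function itself is invented):

```python
# Illustrative only: a guard that mirrors the documented Object Store v2
# constraints before handing a key-value pair to the real store.
MAX_KEY_LENGTH = 300                # characters per key
MAX_VALUE_SIZE = 10 * 1024 * 1024   # 10 MB per stored value

def validate_entry(key: str, value: bytes) -> None:
    """Raise if the key or value violates the documented OSv2 limits."""
    if len(key) > MAX_KEY_LENGTH:
        raise ValueError(f"key exceeds {MAX_KEY_LENGTH} characters")
    if len(value) > MAX_VALUE_SIZE:
        raise ValueError("value exceeds the 10 MB per-item limit")

# A descriptive key well under 300 characters and a small value pass the check.
validate_entry("order:2021-03-15:0004711", b'{"status": "SHIPPED"}')
print("entry accepted")
```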
A European company has customers all across Europe, and the IT department is migrating from an older platform to MuleSoft. The main requirements are that the new platform should allow redeployments with zero downtime and deployment of applications to multiple runtime versions, provide security and speed, and utilize Anypoint MQ as the message service. Which runtime plane should the company select based on the requirements without additional network configuration?
A. Runtime Fabric on VMs / Bare Metal for the runtime plane
B. Customer-hosted runtime plane
C. MuleSoft-hosted runtime plane (CloudHub)
D. Anypoint Runtime Fabric on Self-Managed Kubernetes for the runtime plane
Explanation:
For a European company with requirements such as zero-downtime redeployment,
deployment of applications to multiple runtime versions, secure and fast performance, and
the use of Anypoint MQ without additional network configuration, CloudHub is the best
choice: it is the MuleSoft-hosted runtime plane, so Anypoint MQ can be used without any
additional network configuration; redeployments can be performed with zero downtime;
each application can be deployed on the runtime version it needs; and European regions
are available to keep workloads close to European customers.
A set of tests must be performed prior to deploying API implementations to a staging
environment. Due to data security and access restrictions, untested APIs cannot be
granted access to the backend systems, so instead mocked data must be used for these
tests. The amount of available mocked data and its contents is sufficient to entirely test the
API implementations with no active connections to the backend systems. What type of
tests should be used to incorporate this mocked data?
A.
Integration tests
B.
Performance tests
C.
Functional tests (Blackbox)
D.
Unit tests (Whitebox)
Unit tests (Whitebox)
Explanation:
Correct Answer: Unit tests (Whitebox)
*****************************************
Reference: https://docs.mulesoft.com/mule-runtime/3.9/testing-strategies
As per general IT testing practice and MuleSoft's recommended practice, integration and
performance tests should be done on a full end-to-end setup for a correct evaluation, which
means all end systems should be connected while these tests run. So those options are
OUT, and we are left with unit tests and functional tests.
As per attached reference documentation from MuleSoft:
Unit Tests - are limited to the code that can be realistically exercised without the need to
run it inside Mule itself. Good candidates are small pieces of modular code, subflows,
custom transformers, custom components, custom expression evaluators, etc.
Functional Tests - are those that most extensively exercise your application configuration.
In these tests, you have the freedom and tools to simulate happy and unhappy paths.
You also have the possibility to create stubs for target services and make them succeed or
fail to easily simulate happy and unhappy paths, respectively.
As the scenario in the question requires the API implementations to be tested before
deployment to staging, and clearly indicates that there is a sufficient amount of mocked
data to test the various components of the API implementations with no active
connections to the backend systems, unit tests are the ones to be used to incorporate this
mocked data.
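As an illustration of the idea (plain Python with unittest.mock rather than MUnit; the component, data, and names are invented), a unit (whitebox) test can exercise a piece of the implementation against mocked backend data with no live backend connection:

```python
import unittest
from unittest.mock import Mock

# Invented component under test: it would normally call a backend client,
# but in a unit test the client is replaced with a mock that returns the
# prepared mocked data.
def get_customer_summary(backend_client, customer_id):
    record = backend_client.fetch_customer(customer_id)
    return {"id": customer_id, "name": record["name"].title()}

class CustomerSummaryTest(unittest.TestCase):
    def test_summary_uses_mocked_backend_data(self):
        mocked_backend = Mock()
        mocked_backend.fetch_customer.return_value = {"name": "acme corp"}

        result = get_customer_summary(mocked_backend, "42")

        # The assertion relies entirely on the mocked data, no backend needed.
        self.assertEqual(result, {"id": "42", "name": "Acme Corp"})
        mocked_backend.fetch_customer.assert_called_once_with("42")

if __name__ == "__main__":
    unittest.main()
```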
An organization makes a strategic decision to move towards an IT operating model that emphasizes consumption of reusable IT assets using modern APIs (as defined by MuleSoft). What best describes each modern API in relation to this new IT operating model?
A.
Each modern API has its own software development lifecycle, which reduces the need for documentation and automation
B.
Each modern API must be treated like a product and designed for a particular target audience (for instance, mobile app developers)
C.
Each modern API must be easy to consume, so should avoid complex authentication mechanisms such as SAML or JWT
D.
Each modern API must be REST and HTTP based
Each modern API must be treated like a product and designed for a particular target audience (for instance, mobile app developers)
Explanation:
Correct Answer: Each modern API must be treated like a product and designed for a
particular target audience (for instance, mobile app developers).
*****************************************
An API implementation is deployed to CloudHub.
What conditions can be alerted on using the default Anypoint Platform functionality, where
the alert conditions depend on the end-to-end request processing of the API
implementation?
A.
When the API is invoked by an unrecognized API client
B.
When a particular API client invokes the API too often within a given time period
C.
When the response time of API invocations exceeds a threshold
D.
When the API receives a very high number of API invocations
When the response time of API invocations exceeds a threshold
Explanation:
Correct Answer: When the response time of API invocations exceeds a threshold
*****************************************
>> Alerts can be set up for all the given options using the default Anypoint Platform
functionality.
>> However, the question insists on an alert whose conditions depend on the end-to-end
request processing of the API implementation.
>> An alert on "Response Time" is the only one that requires end-to-end request processing
of the API implementation in order to determine whether the threshold is exceeded (a small
sketch after the reference below illustrates this).
Reference: https://docs.mulesoft.com/api-manager/2.x/using-api-alerts
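A minimal sketch of why this condition depends on end-to-end processing (plain Python; the endpoint URL and threshold are invented, and this is only an illustration of the measurement, not how Anypoint Platform alerting is implemented):

```python
import time
import requests

RESPONSE_TIME_THRESHOLD_MS = 500            # hypothetical alert threshold
API_URL = "https://example.org/api/orders"  # hypothetical endpoint

# Determining whether the condition is met requires the request to be
# processed end to end: the clock only stops once the API implementation
# has produced its full response.
start = time.monotonic()
response = requests.get(API_URL)
elapsed_ms = (time.monotonic() - start) * 1000

if elapsed_ms > RESPONSE_TIME_THRESHOLD_MS:
    print(f"ALERT: response time {elapsed_ms:.0f} ms exceeded the threshold")
else:
    print(f"OK: {elapsed_ms:.0f} ms (status {response.status_code})")
```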