A company has created a successful enterprise data model (EDM). The company is
committed to building an application network by adopting modern APIs as a core enabler of
the company's IT operating model. At what API tiers (experience, process, system) should
the company require reusing the EDM when designing modern API data models?
A.
At the experience and process tiers
B.
At the experience and system tiers
C.
At the process and system tiers
D.
At the experience, process, and system tiers
At the process and system tiers
Explanation:
Correct Answer: At the process and system tiers
*****************************************
>> Experience layer APIs are modeled and designed exclusively for the end user's
experience, so their data models vary with the nature and type of the API consumer. For
example, mobile consumers need lightweight data models that transfer easily over the wire,
whereas web-based consumers need more detailed data models to render richer pages.
Enterprise data models therefore serve well as canonical models but are not a good fit for
experience APIs.
>> That is why EDMs should be reused extensively in the process and system tiers but NOT
in the experience tier.
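To make this concrete, below is a minimal, hypothetical RAML 1.0 type library sketch. The
Customer and MobileCustomerSummary type names and their fields are illustrative
assumptions only: the canonical EDM type would be reused as-is in process- and system-tier
API contracts, while the experience-tier API exposes a trimmed-down model shaped for its
specific consumer.

```raml
#%RAML 1.0 Library
# Hypothetical types, for illustration only
types:
  Customer:                      # canonical EDM entity, reused by process and system APIs
    type: object
    properties:
      customerId: string
      legalName: string
      taxIdentifier: string
      billingAddresses: string[]
      creditRating: number
  MobileCustomerSummary:         # experience-tier model, trimmed for a mobile consumer
    type: object
    properties:
      id: string
      displayName: string
```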
Once an API Implementation is ready and the API is registered on API Manager, who should request the access to the API on Anypoint Exchange?
A.
None
B.
Both
C.
API Client
D.
API Consumer
API Consumer
Explanation:
Correct Answer: API Consumer
*****************************************
>> An API client is a piece of code or a program that uses the client credentials of an API
consumer but does not directly interact with Anypoint Exchange to obtain access.
>> The API consumer is the one who registers and requests access to the API; the API
client then uses those client credentials to invoke the API.
So, the API consumer is the one who needs to request access to the API on Anypoint
Exchange.
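As an illustration, the snippet below is a hypothetical RAML 1.0 trait sketching the client
credentials that an API client would send on each request once the API consumer's access
request has been approved in Anypoint Exchange. The trait and header names are
assumptions for this example, following the common client ID enforcement convention.

```raml
#%RAML 1.0 Trait
# Hypothetical trait: credentials are issued to the registered API consumer,
# then passed by the API client on every request
headers:
  client_id:
    type: string
    description: Client ID granted to the API consumer when access was approved
  client_secret:
    type: string
    description: Client secret granted to the API consumer when access was approved
```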
A retail company with thousands of stores has an API to receive data about purchases and
insert it into a single database. Each individual store sends a batch of purchase data to the
API about every 30 minutes. The API implementation uses a database bulk insert
command to submit all the purchase data to a database using a custom JDBC driver
provided by a data analytics solution provider. The API implementation is deployed to a
single CloudHub worker. The JDBC driver processes the data into a set of several
temporary disk files on the CloudHub worker, and then the data is sent to an analytics
engine using a proprietary protocol. This process usually takes less than a few minutes.
Sometimes a request fails. In this case, the logs show a message from the JDBC driver
indicating an out-of-file-space message. When the request is resubmitted, it is successful.
What is the best way to try to resolve this throughput issue?
A.
Use a CloudHub autoscaling policy to add CloudHub workers
B.
Use a CloudHub autoscaling policy to increase the size of the CloudHub worker
C.
Increase the size of the CloudHub worker(s)
D.
Increase the number of CloudHub workers
Increase the size of the CloudHub worker(s)
Explanation:
Correct Answer: Increase the size of the CloudHub worker(s)
*****************************************
The key details that we can take out from the given scenario are:
>> API implementation uses a database bulk insert command to submit all the purchase
data to a database
>> JDBC driver processes the data into a set of several temporary disk files on the
CloudHub worker
>> Sometimes a request fails, and the logs show a message from the JDBC driver indicating
an out-of-file-space condition
Based on the above details:
>> Neither autoscaling option helps, because autoscaling rules cannot be triggered by error
messages. Autoscaling rules are based on CPU/memory usage, not on a particular error or
on disk-space issues.
>> Increasing the number of CloudHub workers also does NOT help, because the failures
are not caused by CPU or memory pressure; they are caused by running out of disk space.
>> Moreover, the API performs a bulk insert of each received batch, which means a given
batch is handled by ONE worker at a time. The disk-space issue therefore has to be tackled
on a per-worker basis; adding more workers does not help, as a batch can still fail on
whichever worker runs out of disk space.
Therefore, the right way to resolve this issue is to increase the vCore size of the worker so
that a worker with more disk space is provisioned.
A set of tests must be performed prior to deploying API implementations to a staging
environment. Due to data security and access restrictions, untested APIs cannot be
granted access to the backend systems, so instead mocked data must be used for these
tests. The amount of available mocked data and its contents is sufficient to entirely test the
API implementations with no active connections to the backend systems. What type of
tests should be used to incorporate this mocked data?
A.
Integration tests
B.
Performance tests
C.
Functional tests (Blackbox)
D.
Unit tests (Whitebox)
Unit tests (Whitebox)
Explanation:
Correct Answer: Unit tests (Whitebox)
*****************************************
Reference: https://docs.mulesoft.com/mule-runtime/3.9/testing-strategies
As per general IT testing practice and MuleSoft's recommended practice, integration and
performance tests should be run against a full end-to-end setup for a proper evaluation,
which means all end systems should be connected during the tests. So, these options are
OUT, and we are left with unit tests and functional tests.
As per attached reference documentation from MuleSoft:
Unit tests are limited to the code that can be realistically exercised without the need to run
it inside Mule itself. Good candidates are small pieces of modular code, subflows, custom
transformers, custom components, custom expression evaluators, etc.
Functional tests are those that most extensively exercise your application configuration. In
these tests, you have the freedom and tools to simulate happy and unhappy paths, including
the ability to create stubs for target services and make them succeed or fail.
As the scenario requires the API implementations to be tested before deployment to staging,
and clearly indicates that there is sufficient mocked data to test the various components of
the API implementations with no active connections to the backend systems, unit tests are
the ones to use to incorporate this mocked data.
An organization uses various cloud-based SaaS systems and multiple on-premises
systems. The on-premises systems are an important part of the organization's application
network and can only be accessed from within the organization's intranet.
What is the best way to configure and use Anypoint Platform to support integrations with
both the cloud-based SaaS systems and on-premises systems?
A) Use CloudHub-deployed Mule runtimes in an Anypoint VPC managed by the Anypoint
Platform Private Cloud Edition control plane
B) Use a combination of CloudHub-deployed and manually provisioned on-premises Mule
runtimes managed by the MuleSoft-hosted Platform control plane
A.
Option A
B.
Option B
C.
Option C
D.
Option D
Option B
Explanation:
Correct Answer: Use a combination of CloudHub-deployed and manually provisioned
on-premises Mule runtimes managed by the MuleSoft-hosted Platform control plane.
*****************************************
Key details to be taken from the given scenario:
>> Organization uses BOTH cloud-based and on-premises systems
>> On-premises systems can only be accessed from within the organization's intranet
Let us evaluate the given choices based on the above key details:
>> CloudHub-deployed Mule runtimes can ONLY be controlled using the MuleSoft-hosted
control plane. We CANNOT use the Private Cloud Edition's control plane to control CloudHub
Mule runtimes. So, the option suggesting this is INVALID.
>> Using CloudHub-deployed Mule runtimes in the shared worker cloud managed by the
MuleSoft-hosted Anypoint Platform does not address the on-premises systems at all, so it is
IRRELEVANT to the given scenario. The option suggesting this is INVALID.
>> Using an on-premises installation of Mule runtimes that is completely isolated with NO
external network access, managed by the Anypoint Platform Private Cloud Edition control
plane, would work for the on-premises integrations. However, with NO external access,
integrations cannot be done with the SaaS-based applications; moreover, CloudHub-hosted
apps are the best fit for integrating with SaaS-based applications. So, the option suggesting
this is also INVALID.
The best way to configure and use Anypoint Platform to support these mixed/hybrid
integrations is to use a combination of CloudHub-deployed and manually provisioned
on-premises Mule runtimes managed by the MuleSoft-hosted Platform control plane.
How can the application of a rate limiting API policy be accurately reflected in the RAML definition of an API?
A.
By refining the resource definitions by adding a description of the rate limiting policy behavior
B.
By refining the request definitions by adding a remaining Requests query parameter with description, type, and example
C.
By refining the response definitions by adding the out-of-the-box Anypoint Platform
ratelimit-enforcement securityScheme with description, type, and example
D.
By refining the response definitions by adding the x-ratelimit-* response headers with
description, type, and example
By refining the response definitions by adding the x-ratelimit-* response headers with
description, type, and example
Explanation:
Correct Answer: By refining the response definitions by adding the x-ratelimit-* response
headers with description, type, and example
*****************************************
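For illustration, a minimal sketch of how this could look in a RAML definition is shown below.
The API title, resource, and example values are assumptions; the x-ratelimit-* header names
follow those typically returned once a rate limiting policy is applied.

```raml
#%RAML 1.0
title: Orders API   # hypothetical API, used only for this example
/orders:
  get:
    responses:
      200:
        headers:
          x-ratelimit-limit:
            type: integer
            description: Number of requests allowed in the current rate limit window
            example: 100
          x-ratelimit-remaining:
            type: integer
            description: Number of requests still available in the current window
            example: 94
          x-ratelimit-reset:
            type: integer
            description: Time remaining, in milliseconds, until the current window resets
            example: 45000
```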
Traffic is routed through an API proxy to an API implementation. The API proxy is managed
by API Manager and the API implementation is deployed to a CloudHub VPC using
Runtime Manager. API policies have been applied to this API. In this deployment scenario,
at what point are the API policies enforced on incoming API client requests?
A.
At the API proxy
B.
At the API implementation
C.
At both the API proxy and the API implementation
D.
At a MuleSoft-hosted load balancer
At the API proxy
Explanation:
Correct Answer: At the API proxy
*****************************************
>> API policies can be enforced at two places in the Mule platform.
>> One - as embedded policy enforcement in the same Mule runtime where the API
implementation is running.
>> Two - on an API proxy sitting in front of the Mule runtime where the API implementation
is running.
>> As the deployment scenario in the question involves an API proxy, the policies will be
enforced at the API proxy.
What is true about where an API policy is defined in Anypoint Platform and how it is then applied to API instances?
A.
The API policy is defined in Runtime Manager as part of the API deployment to a Mule
runtime, and then ONLY applied to the specific API instance
B.
The API policy is defined in API Manager for a specific API instance, and then ONLY
applied to the specific API instance
C.
The API policy is defined in API Manager and then automatically applied to ALL API instances
D.
The API policy is defined in API Manager, and then applied to ALL API instances in the
specified environment
The API policy is defined in API Manager for a specific API instance, and then ONLY
applied to the specific API instance
Explanation:
Correct Answer: The API policy is defined in API Manager for a specific API instance, and
then ONLY applied to the specific API instance.
*****************************************
>> Once our API specifications are ready and published to Exchange, we need to visit API
Manager and register an API instance for each API.
>> API Manager is the place where API aspects are managed, such as addressing NFRs by
enforcing policies.
>> We can create multiple instances of the same API and manage them differently for
different purposes.
>> One instance can have one set of API policies applied, and another instance of the same
API can have a different set of policies applied for some other purpose.
>> These APIs and their instances are defined on a PER-environment basis, so they need to
be managed separately in each environment.
>> We can ensure that the same API instance configuration (SLAs, policies, etc.) is carried
over when promoting to higher environments by using the platform's promotion feature, but
this is optional; the configuration can still be changed per environment if needed.
>> Runtime Manager is the place to manage API implementations and their Mule runtimes,
but NOT the APIs themselves. Although API policies are executed in Mule runtimes, we
CANNOT enforce API policies from Runtime Manager; that must be done via API Manager
for a specific API instance in an environment.
So, based on these facts, the right statement among the given choices is: "The API policy is
defined in API Manager for a specific API instance, and then ONLY applied to the specific
API instance".
Reference: https://docs.mulesoft.com/api-manager/2.x/latest-overview-concept