A company requires Mule applications deployed to CloudHub to be isolated between non-production
and production environments. This is so Mule applications deployed to non-production
environments can only access backend systems running in their customer-hosted
non-production environment, and so Mule applications deployed to production
environments can only access backend systems running in their customer-hosted
production environment. How does MuleSoft recommend modifying Mule applications,
configuring environments, or changing infrastructure to support this type of per-environment
isolation between Mule applications and backend systems?
A.
Modify properties of Mule applications deployed to the production Anypoint Platform
environments to prevent access from non-production Mule applications
B.
Configure firewall rules in the infrastructure inside each customer-hosted environment so
that only IP addresses from the corresponding Anypoint Platform environments are allowed
to communicate with corresponding backend systems
C.
Create non-production and production environments in different Anypoint Platform
business groups
D.
Create separate Anypoint VPCs for non-production and production environments, then configure connections to the backend systems in the corresponding customer-hosted
environments
Create separate Anypoint VPCs for non-production and production environments, then configure connections to the backend systems in the corresponding customer-hosted
environments
Explanation:
Correct Answer: Create separate Anypoint VPCs for non-production and production
environments, then configure connections to the backend systems in the corresponding
customer-hosted environments.
*****************************************
>> Creating different Business Groups does NOT make any difference with respect to accessing the
non-production and production customer-hosted environments. Applications in both Business Groups
can still reach both environments unless proper network restrictions are put in place.
>> We should NOT couple the Mule application implementations to the environment. In fact,
applications should never be coupled to environments by binding such restrictions in their
properties. Only basic settings such as endpoint URLs should be externalized in properties,
not environment-level access restrictions (a small sketch follows the reference below).
>> IP addresses on CloudHub are dynamic unless static IP addresses are specifically assigned,
so it is not practical to set up firewall rules in the customer-hosted infrastructure. Moreover,
even if static IP addresses are assigned, there could be hundreds of applications running on
CloudHub, and setting up rules for all of them would be tedious, hard to maintain, and
definitely not a good practice.
>> The best practice recommended by MuleSoft (in fact, by any cloud provider) is to keep
separate Anypoint VPCs for production and non-production, and to set up VPC peering or
VPN tunnels from these Anypoint VPCs to the respective production and non-production
customer-hosted environment networks.
Reference: https://docs.mulesoft.com/runtime-manager/virtual-private-cloud
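To illustrate the properties point above: a minimal Java sketch (the file name, property key, and class name are all made up for illustration) of keeping only an endpoint URL in per-environment properties, while leaving environment isolation to the network layer:

```java
import java.io.InputStream;
import java.util.Properties;

// Hypothetical example: only the backend endpoint URL differs per environment and is
// read from a property file chosen at deploy time; access restrictions are NOT encoded
// here -- they are enforced by the per-environment VPC peering / VPN setup.
public class BackendConfig {
    public static void main(String[] args) throws Exception {
        String env = args.length > 0 ? args[0] : "nonprod";   // e.g. "nonprod" or "prod"
        Properties props = new Properties();
        // assumes config-nonprod.properties / config-prod.properties exist on the classpath
        try (InputStream in = BackendConfig.class
                .getResourceAsStream("/config-" + env + ".properties")) {
            props.load(in);
        }
        String backendUrl = props.getProperty("backend.orders.url");
        System.out.println("Calling backend at " + backendUrl);
    }
}
```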
What best describes the Fully Qualified Domain Names (FQDNs), also known as DNS entries, created when a Mule application is deployed to the CloudHub Shared Worker Cloud?
A.
A fixed number of FQDNs are created, IRRESPECTIVE of the environment and VPC design
B.
The FQDNs are determined by the application name chosen, IRRESPECTIVE of the region
C.
The FQDNs are determined by the application name, but can be modified by an
administrator after deployment
D.
The FQDNs are determined by both the application name and the Anypoint Platform
organization
The FQDNs are determined by the application name chosen, IRRESPECTIVE of the region
Explanation:
Correct Answer: The FQDNs are determined by the application name chosen,
IRRESPECTIVE of the region
*****************************************
>> When deploying applications to the Shared Worker Cloud, the FQDN is always determined by
the application name chosen.
>> It does NOT matter what region the app is being deployed to.
>> Although it is true that the generated FQDN includes the region (e.g.
exp-salesorder-api.au-s1.cloudhub.io), this does NOT mean that the same name can be reused
when deploying to another CloudHub region.
>> The application name must be universally unique, irrespective of region and organization,
and it alone determines the FQDN on the shared load balancers (see the sketch below).
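A tiny illustrative sketch (the real FQDN is assigned by CloudHub, not computed by application code; the app name and region below come from the example above):

```java
// Illustrative only: shows the shape of a Shared Worker Cloud FQDN.
// The application name alone must be globally unique; the region suffix is added
// by CloudHub and does not make the same name reusable in another region.
public class FqdnExample {
    static String sharedLbFqdn(String appName, String regionSuffix) {
        return appName + "." + regionSuffix + ".cloudhub.io";
    }

    public static void main(String[] args) {
        System.out.println(sharedLbFqdn("exp-salesorder-api", "au-s1"));
        // prints: exp-salesorder-api.au-s1.cloudhub.io
    }
}
```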
An API implementation is updated. When must the RAML definition of the API also be updated?
A.
When the API implementation changes the structure of the request or response messages
B.
When the API implementation changes from interacting with a legacy backend system deployed on-premises to a modern, cloud-based (SaaS) system
C.
When the API implementation is migrated from an older to a newer version of the Mule runtime
D.
When the API implementation is optimized to improve its average response time
When the API implementation changes the structure of the request or response messages
Explanation:
Correct Answer: When the API implementation changes the structure of the request or
response messages
*****************************************
>> The RAML definition usually needs to be touched only when there are changes in the
request/response schemas or in any traits on the API (a hypothetical example follows below).
>> It need not be modified for internal changes in the API implementation such as performance
tuning, backend system migrations, etc.
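For instance, a structural change like the one below in the implementation's response type (a hypothetical Java DTO; the class and field names are made up) changes what clients receive and therefore must also be reflected in the RAML type definition, whereas an internal optimization would not:

```java
// Hypothetical response DTO of the API implementation.
// Adding, removing, or renaming a field changes the response structure seen by
// clients, so the corresponding RAML type must be updated to stay in sync.
public class OrderStatusResponse {
    public String orderId;
    public String status;
    public String estimatedDelivery; // newly added field -> RAML type needs updating
}
```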
An Order API must be designed that contains significant amounts of integration logic and
involves the invocation of the Product API.
The power relationship between Order API and Product API is one of "Customer/Supplier",
because the Product API is used heavily throughout the organization and is developed by a
dedicated development team located in the office of the CTO.
What strategy should be used to deal with the API data model of the Product API within the
Order API?
A.
Convince the development team of the Product API to adopt the API data model of the Order API such that the integration logic of the Order API can work with one consistent internal data model
B.
Work with the API data types of the Product API directly when implementing the integration logic of the Order API such that the Order API uses the same (unchanged) data types as the Product API
C.
Implement an anti-corruption layer in the Order API that transforms the Product API data
model into internal data types of the Order API
D.
Start an organization-wide data modeling initiative that will result in an Enterprise Data
Model that will then be used in both the Product API and the Order API
Convince the development team of the Product API to adopt the API data model of the Order API such that the integration logic of the Order API can work with one consistent internal data model
Explanation:
Correct Answer: Convince the development team of the Product API to adopt the API data
model of the Order API such that the integration logic of the Order API can work with one
consistent internal data model
*****************************************
Key details to note from the given scenario:
>> The power relationship between the Order API and the Product API is Customer/Supplier.
So, as per the rules of "Power Relationships", the caller (in this case the Order API team)
requests features from the called API's team (the Product API team), and the Product API team
needs to accommodate those requests.
A set of tests must be performed prior to deploying API implementations to a staging
environment. Due to data security and access restrictions, untested APIs cannot be
granted access to the backend systems, so instead mocked data must be used for these
tests. The amount of available mocked data and its contents is sufficient to entirely test the
API implementations with no active connections to the backend systems. What type of
tests should be used to incorporate this mocked data?
A.
Integration tests
B.
Performance tests
C.
Functional tests (Blackbox)
D.
Unit tests (Whitebox)
Unit tests (Whitebox)
Explanation:
Correct Answer: Unit tests (Whitebox)
*****************************************
Reference: https://docs.mulesoft.com/mule-runtime/3.9/testing-strategies
As per general IT testing practice and MuleSoft recommended practice, Integration and
Performance tests should be done on a full end-to-end setup for a correct evaluation, which
means all end systems should be connected while doing the tests. So, these options are
OUT and we are left with Unit Tests and Functional Tests.
As per attached reference documentation from MuleSoft:
Unit Tests - are limited to the code that can be realistically exercised without the need to
run it inside Mule itself. Good candidates are small pieces of modular code, sub-flows,
custom transformers, custom components, custom expression evaluators, etc.
Functional Tests - are those that most extensively exercise your application configuration.
In these tests, you have the freedom and tools to simulate happy and unhappy paths.
You can also create stubs for target services and make them succeed or fail to easily
simulate happy and unhappy paths respectively.
As the scenario in the question demands that the API implementations be tested before
deployment to Staging, and clearly indicates that there is a sufficient amount of mocked
data to test the various components of the API implementations with no active connections
to the backend systems, Unit Tests are the ones to be used to incorporate this mocked data
(a minimal sketch follows below).
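A minimal JUnit 5 sketch of the idea (all class and method names are made up): the backend dependency is replaced by a stub that returns mocked data, so the implementation logic is verified with no live connection to any backend system:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical names throughout; the point is that the backend is stubbed with
// mocked data, so no connection to the real system is needed for the test.
class OrderEnricherTest {

    interface ProductBackend {                       // dependency of the code under test
        String productNameFor(String productId);
    }

    static class OrderEnricher {                     // code under test
        private final ProductBackend backend;
        OrderEnricher(ProductBackend backend) { this.backend = backend; }
        String describe(String orderId, String productId) {
            return orderId + ": " + backend.productNameFor(productId);
        }
    }

    @Test
    void enrichesOrderUsingMockedBackendData() {
        ProductBackend mocked = productId -> "Blue Widget";   // mocked data, no real call
        OrderEnricher enricher = new OrderEnricher(mocked);
        assertEquals("ORD-1: Blue Widget", enricher.describe("ORD-1", "P-42"));
    }
}
```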
An organization uses various cloud-based SaaS systems and multiple on-premises
systems. The on-premises systems are an important part of the organization's application
network and can only be accessed from within the organization's intranet.
What is the best way to configure and use Anypoint Platform to support integrations with
both the cloud-based SaaS systems and on-premises systems?
A) Use CloudHub-deployed Mule runtimes in an Anypoint VPC managed by Anypoint
Platform Private Cloud Edition control plane
A.
Option A
B.
Option B
C.
Option C
D.
Option D
Option B
Explanation:
Correct Answer: Use a combination of CloudHub-deployed and manually provisioned on-premises
Mule runtimes managed by the MuleSoft-hosted Platform control plane.
*****************************************
Key details to be taken from the given scenario:
>> Organization uses BOTH cloud-based and on-premises systems
>> On-premises systems can only be accessed from within the organization's intranet
Let us evaluate the given choices based on above key details:
>> CloudHub-deployed Mule runtimes can ONLY be controlled using the MuleSoft-hosted
control plane. We CANNOT use the Private Cloud Edition's control plane to manage CloudHub
Mule runtimes. So, the option suggesting this is INVALID.
>> Using CloudHub-deployed Mule runtimes in the shared worker cloud managed by the
MuleSoft-hosted Anypoint Platform, on its own, does not address the on-premises systems that
are reachable only from the intranet, so it is IRRELEVANT to the given scenario. The option
suggesting this is INVALID.
>> Using an on-premises installation of Mule runtimes that is completely isolated with NO
external network access, managed by the Anypoint Platform Private Cloud Edition control
plane, would work for on-premises integrations. However, with NO external access,
integrations with the cloud-based SaaS systems cannot be done. Moreover, CloudHub-hosted
apps are the best fit for integrating with SaaS-based applications. So, the option suggesting
this is INVALID as well.
The best way to configure and use Anypoint Platform to support these mixed/hybrid
integrations is to use a combination of CloudHub-deployed and manually provisioned
on-premises Mule runtimes, all managed by the MuleSoft-hosted Platform control plane.
What do the API invocation metrics provided by Anypoint Platform provide?
A.
ROI metrics from APIs that can be directly shared with business users
B.
Measurements of the effectiveness of the application network based on the level of reuse
C.
Data on past API invocations to help identify anomalies and usage patterns across various APIs
D.
Proactive identification of likely future policy violations that exceed a given threat
threshold
Data on past API invocations to help identify anomalies and usage patterns across various APIs
Explanation:
Correct Answer: Data on past API invocations to help identify anomalies and usage
patterns across various APIs
*****************************************
API invocation metrics provided by Anypoint Platform:
>> Do NOT provide any Return on Investment (ROI) related information. So the option
suggesting that is OUT.
>> Do NOT provide any information on how APIs are reused or whether APIs are being used
effectively.
>> Do NOT provide any predictive information to help us proactively identify future policy
violations.
So, the kind of data/information we can get from such metrics is data on past API invocations,
which helps identify anomalies and usage patterns across various APIs (a rough illustration
follows the reference below).
Reference:
https://usermanual.wiki/Document/APAAppNetstudentManual02may2018.991784750.pdf
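As a rough illustration of the kind of analysis such historical invocation data enables (the numbers, threshold, and class name are made up; this is not an Anypoint Platform API):

```java
import java.util.List;

// Made-up example: flag a day whose invocation count deviates strongly from the
// average of the preceding days, i.e. a simple anomaly in past usage data.
public class InvocationAnomalyCheck {
    public static void main(String[] args) {
        List<Integer> dailyInvocations = List.of(1040, 980, 1015, 990, 4100); // last day spikes
        double historicalAvg = dailyInvocations.subList(0, dailyInvocations.size() - 1)
                .stream().mapToInt(Integer::intValue).average().orElse(0);
        int latest = dailyInvocations.get(dailyInvocations.size() - 1);
        if (latest > 2 * historicalAvg) {            // arbitrary threshold for the sketch
            System.out.println("Possible anomaly: " + latest + " invocations vs avg " + historicalAvg);
        }
    }
}
```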
A retail company with thousands of stores has an API to receive data about purchases and
insert it into a single database. Each individual store sends a batch of purchase data to the
API about every 30 minutes. The API implementation uses a database bulk insert
command to submit all the purchase data to a database using a custom JDBC driver
provided by a data analytics solution provider. The API implementation is deployed to a
single CloudHub worker. The JDBC driver processes the data into a set of several
temporary disk files on the CloudHub worker, and then the data is sent to an analytics
engine using a proprietary protocol. This process usually takes less than a few minutes.
Sometimes a request fails. In this case, the logs show a message from the JDBC driver
indicating an out-of-file-space message. When the request is resubmitted, it is successful.
What is the best way to try to resolve this throughput issue?
A.
Use a CloudHub autoscaling policy to add CloudHub workers
B.
Use a CloudHub autoscaling policy to increase the size of the CloudHub worker
C.
Increase the size of the CloudHub worker(s)
D.
Increase the number of CloudHub workers
Increase the size of the CloudHub worker(s)
Explanation:
Correct Answer: Increase the size of the CloudHub worker(s)
*****************************************
The key details that we can take out from the given scenario are:
>> API implementation uses a database bulk insert command to submit all the purchase
data to a database
>> JDBC driver processes the data into a set of several temporary disk files on the
CloudHub worker
>> Sometimes a request fails and the logs show a message indicating an out-of-file-space
message
Based on above details:
>> Neither auto-scaling option helps, because auto-scaling rules cannot be based on error
messages. Auto-scaling policies are triggered by CPU/memory usage, not by a specific error
or by disk-space conditions.
>> Increasing the number of CloudHub workers also does NOT help, because the failures are
not caused by CPU or memory pressure; they are caused by running out of disk space.
>> Moreover, the API performs a bulk insert of the received batch data, which means each
batch is handled by ONE worker at a time. So the disk-space issue must be tackled on a
per-worker basis: with multiple workers, a batch can still fail on whichever worker runs out
of disk space.
Therefore, the right way to deal with this issue and resolve it is to increase the worker
size (vCores) so that a worker with more disk space is provisioned (a small diagnostic sketch
follows below).
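As a small diagnostic sketch (purely illustrative and not part of the actual implementation; it assumes the JDBC driver writes to the JVM's default temp directory), the remaining temp-disk space could be logged before the bulk insert to confirm that disk space, not CPU or memory, is the limiting factor:

```java
import java.io.File;

// Illustrative only: log how much usable space remains in the temp directory
// (assumed location of the driver's temporary files) before starting the bulk insert.
public class TempDiskCheck {
    public static void main(String[] args) {
        File tempDir = new File(System.getProperty("java.io.tmpdir"));
        long freeMb = tempDir.getUsableSpace() / (1024 * 1024);
        System.out.println("Usable temp space: " + freeMb + " MB in " + tempDir);
    }
}
```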