What condition requires using a CloudHub Dedicated Load Balancer?
A.
When cross-region load balancing is required between separate deployments of the same Mule application
B.
When custom DNS names are required for API implementations deployed to customer-hosted Mule runtimes
C.
When API invocations across multiple CloudHub workers must be load balanced
D.
When server-side load-balanced TLS mutual authentication is required between API
implementations and API clients
When server-side load-balanced TLS mutual authentication is required between API
implementations and API clients
Explanation:
Correct Answer: When server-side load-balanced TLS mutual authentication is required
between API implementations and API clients
*****************************************
Fact / Memory Tip: Although there are many benefits to a CloudHub Dedicated Load
Balancer, TWO important things that should come to one's mind when considering it are:
>> Having URL endpoints with Custom DNS names on CloudHub deployed apps
>> Configuring custom certificates for both HTTPS and Two-way (Mutual) authentication.
Coming to the options provided for this question:
>> We CANNOT use DLB to perform cross-region load balancing between separate
deployments of the same Mule application.
>> We can have mapping rules so that more than one DLB URL points to the same Mule
app. But the reverse (more than one Mule app having the same DLB URL) is NOT POSSIBLE.
>> It is true that a DLB helps to set up custom DNS names for CloudHub-deployed Mule apps,
but NOT for apps deployed to customer-hosted Mule runtimes.
>> It is true that we can load balance API invocations across multiple CloudHub workers
using a DLB, but it is NOT A MUST. We can achieve the same load balancing using the SLB
(Shared Load Balancer) too, so we do NOT necessarily require a DLB to achieve it.
So the only right option that fits the scenario and requires us to use a DLB is when TLS
mutual authentication is required between API implementations and API clients.
Reference: https://docs.mulesoft.com/runtime-manager/cloudhub-dedicated-load-balancer
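As a rough illustration of the mutual-TLS point above, the minimal sketch below shows an API client presenting its own certificate when calling an endpoint fronted by a DLB. The hostname, certificate paths, and CA bundle are placeholder assumptions, not values from this question.

```python
# Minimal sketch: an API client performing two-way (mutual) TLS against an
# endpoint fronted by a CloudHub Dedicated Load Balancer.
# The URL, certificate, key, and CA bundle paths are placeholders.
import requests

DLB_ENDPOINT = "https://example-dlb.lb.anypointdns.net/orders/v1/status"

response = requests.get(
    DLB_ENDPOINT,
    # Client certificate + private key: presented to the DLB, which is where
    # server-side, load-balanced mutual authentication is handled.
    cert=("client-cert.pem", "client-key.pem"),
    # CA bundle used to validate the custom server certificate configured on the DLB.
    verify="dlb-ca-bundle.pem",
    timeout=10,
)
response.raise_for_status()
print(response.json())
```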
An eCommerce company is adding a new Product Details feature to their website. A customer will launch the product catalog page, where a new Product Details link will appear next to each product; clicking it retrieves the product detail description. Product detail data is updated with product update releases, once or twice a year. Presently, the database response time has been very slow due to high volume. What action retrieves the product details with the lowest response time, fault tolerance, and consistent data?
A. Select the product details from a database in a Cache scope and return them within the API response
B. Select the product details from a database and put them in Anypoint MQ; the Anypoint MQ subscriber will receive the product details and return them within the API response
C. Use an object store to store and retrieve the product details originally read from a database and return them within the API response
D. Select the product details from a database and return them within the API response
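Options A and C both revolve around caching data that changes only once or twice a year so that repeated page views do not hit the slow database. As a rough, non-Mule sketch of that caching idea, the snippet below keeps database results in an in-memory store with a time-to-live; all names and the TTL value are illustrative assumptions.

```python
# Illustrative sketch (not Mule code): cache rarely-changing product details
# so repeated reads avoid the slow database.
import time

_CACHE = {}              # product_id -> (expiry_timestamp, details)
TTL_SECONDS = 24 * 3600  # illustrative TTL; the data changes only once or twice a year

def fetch_from_database(product_id):
    """Placeholder for the slow database query."""
    return {"id": product_id, "description": "..."}

def get_product_details(product_id):
    now = time.time()
    cached = _CACHE.get(product_id)
    if cached and cached[0] > now:
        return cached[1]                        # cache hit: no database call
    details = fetch_from_database(product_id)   # cache miss: query the database once
    _CACHE[product_id] = (now + TTL_SECONDS, details)
    return details
```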
Mule applications that implement a number of REST APIs are deployed to their own subnet
that is inaccessible from outside the organization.
External business-partners need to access these APIs, which are only allowed to be
invoked from a separate subnet dedicated to partners - called Partner-subnet. This subnet
is accessible from the public internet, which allows these external partners to reach it.
Anypoint Platform and Mule runtimes are already deployed in Partner-subnet. These Mule
runtimes can already access the APIs.
What is the most resource-efficient solution to comply with these requirements, while
having the least impact on other applications that are currently using the APIs?
A.
Implement (or generate) an API proxy Mule application for each of the APIs, then deploy the API proxies to the Mule runtimes
B.
Redeploy the API implementations to the same servers running the Mule runtimes
C.
Add an additional endpoint to each API for partner-enablement consumption
D.
Duplicate the APIs as Mule applications, then deploy them to the Mule runtimes
Implement (or generate) an API proxy Mule application for each of the APIs, then deploy the API proxies to the Mule runtimes
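An API proxy simply accepts requests on the partner-facing subnet and forwards them unchanged to the existing API implementation, so the implementations and their current consumers are left untouched. The sketch below is a generic, non-Mule illustration of that pass-through behavior; the internal URL and listening port are made-up values.

```python
# Generic illustration of what an API proxy does: accept a request on the
# partner-facing subnet and forward it to the internal API implementation.
# The internal URL and listening port are illustrative assumptions.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

INTERNAL_API = "http://10.0.1.25:8081"  # API implementation in the private subnet

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the request path to the internal API and relay the response.
        with urlopen(INTERNAL_API + self.path) as upstream:
            body = upstream.read()
            self.send_response(upstream.status)
            self.send_header("Content-Type",
                             upstream.headers.get("Content-Type", "application/json"))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()
```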
What correctly characterizes unit tests of Mule applications?
A.
They test the validity of input and output of source and target systems
B.
They must be run in a unit testing environment with dedicated Mule runtimes for the environment
C.
They must be triggered by an external client tool or event source
D.
They are typically written using MUnit to run in an embedded Mule runtime that does not require external connectivity
They are typically written using MUnit to run in an embedded Mule runtime that does not require external connectivity
Explanation:
Correct Answer: They are typically written using MUnit to run in an embedded Mule runtime
that does not require external connectivity.
*****************************************
The below TWO are characteristics of integration tests, but NOT of unit tests:
>> They test the validity of input and output of source and target systems.
>> They must be triggered by an external client tool or event source.
It is NOT TRUE that Unit Tests must be run in a unit testing environment with dedicated
Mule runtimes for the environment.
MuleSoft offers MUnit for writing unit tests, and they run in an embedded Mule runtime
without needing any separate/dedicated runtimes to execute them. They also do NOT
need any external connectivity, as MUnit supports mocking via stubs.
Reference: https://dzone.com/articles/munit-framework
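MUnit tests themselves are written in Mule XML, but the principle behind the answer is that external systems are mocked instead of called, so the test runs in an embedded runtime with no dedicated environment. The plain Python unit test below sketches that same idea; the function and module names are illustrative, not MUnit APIs.

```python
# Illustrative (non-MUnit) unit test: external connectivity is replaced by a
# mock, so the test runs anywhere without a dedicated environment.
import unittest
from unittest import mock

def enrich_order(order_id, api_client):
    """Code under test: combines a lookup result with local logic."""
    customer = api_client.get_customer(order_id)
    return {"order_id": order_id, "customer_name": customer["name"].upper()}

class EnrichOrderTest(unittest.TestCase):
    def test_enrich_order_uses_mocked_client(self):
        fake_client = mock.Mock()
        fake_client.get_customer.return_value = {"name": "acme"}  # stubbed response
        result = enrich_order(42, fake_client)
        self.assertEqual(result["customer_name"], "ACME")
        fake_client.get_customer.assert_called_once_with(42)

if __name__ == "__main__":
    unittest.main()
```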
A Production environment is running on a dedicated Virtual Private Cloud (VPC) on CloudHub 1.0, and the security team guidelines clearly state no traffic on HTTP. Which two options support these security guidelines?

A. Option A
B. Option B
C. Option C
D. Option D
E. Option E
A retail company with thousands of stores has an API to receive data about purchases and
insert it into a single database. Each individual store sends a batch of purchase data to the
API about every 30 minutes. The API implementation uses a database bulk insert
command to submit all the purchase data to a database using a custom JDBC driver
provided by a data analytics solution provider. The API implementation is deployed to a
single CloudHub worker. The JDBC driver processes the data into a set of several
temporary disk files on the CloudHub worker, and then the data is sent to an analytics
engine using a proprietary protocol. This process usually takes less than a few minutes.
Sometimes a request fails. In this case, the logs show a message from the JDBC driver
indicating an out-of-file-space message. When the request is resubmitted, it is successful.
What is the best way to try to resolve this throughput issue?
A.
Use a CloudHub autoscaling policy to add CloudHub workers
B.
Use a CloudHub autoscaling policy to increase the size of the CloudHub worker
C.
Increase the size of the CloudHub worker(s)
D.
Increase the number of CloudHub workers
Increase the size of the CloudHub worker(s)
Explanation:
Correct Answer: Increase the size of the CloudHub worker(s)
*****************************************
The key details that we can take out from the given scenario are:
>> API implementation uses a database bulk insert command to submit all the purchase
data to a database
>> JDBC driver processes the data into a set of several temporary disk files on the
CloudHub worker
>> Sometimes a request fails and the logs show a message indicating an out-of-file-space
message
Based on the above details:
>> Both auto-scaling options do NOT help, because we cannot set auto-scaling rules
based on error messages. Auto-scaling rules are triggered by CPU/memory usage,
not by a given error or disk-space issue.
>> Increasing the number of CloudHub workers also does NOT help here, because the
failure is not due to performance aspects w.r.t. CPU or memory. It is due to
disk space.
>> Moreover, the API is doing a bulk insert to submit the received batch data, which
means all data is handled by ONE worker at a time. So the disk-space issue should be
tackled on a "per worker" basis. Having multiple workers does not help, as the batch may
still fail on any worker when the disk is out of space on that particular worker.
Therefore, the right way to deal with this issue and resolve it is to increase the vCore size of
the worker so that a new worker with more disk space is provisioned.
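The point is that disk space is a per-worker resource: the whole batch is staged to temporary files on whichever single worker received it. The sketch below (plain Python, not the JDBC driver's actual behaviour) illustrates that per-worker constraint; the data and paths are illustrative assumptions.

```python
# Illustration of the per-worker disk constraint: the batch is staged to
# temporary files on the local disk of the single worker that received it.
import shutil
import tempfile

def stage_batch_to_disk(records):
    """Write the incoming purchase batch to a temp file before the bulk insert."""
    free_bytes = shutil.disk_usage(tempfile.gettempdir()).free
    payload = "\n".join(records).encode("utf-8")
    if len(payload) > free_bytes:
        # This is the failure mode in the scenario: adding more workers does not
        # help, because the whole batch still lands on one worker's disk.
        raise OSError("out of file space on this worker")
    with tempfile.NamedTemporaryFile(delete=False, suffix=".batch") as tmp:
        tmp.write(payload)
        return tmp.name

print(stage_batch_to_disk(["store=0001,sku=A12,qty=3", "store=0001,sku=B07,qty=1"]))
```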
Refer to the exhibit.
A developer is building a client application to invoke an API deployed to the STAGING
environment that is governed by a client ID enforcement policy.
What is required to successfully invoke the API?
A.
The client ID and secret for the Anypoint Platform account owning the API in the STAGING environment
B.
The client ID and secret for the Anypoint Platform account's STAGING environment
C.
The client ID and secret obtained from Anypoint Exchange for the API instance in the
STAGING environment
D.
A valid OAuth token obtained from Anypoint Platform and its associated client ID and
secret
The client ID and secret obtained from Anypoint Exchange for the API instance in the
STAGING environment
Explanation:
Correct Answer: The client ID and secret obtained from Anypoint Exchange for the API
instance in the STAGING environment
*****************************************
>> We CANNOT use the client ID and secret of the Anypoint Platform account or of any
individual environment for accessing the APIs.
>> As the type of policy enforced on the API in question is the "Client ID Enforcement
Policy", OAuth-token-based access won't work.
The right way to access the API is to use the client ID and secret obtained from Anypoint
Exchange for the API instance in the particular environment we want to work in.
References:
Managing API instance Contracts on API Manager
https://docs.mulesoft.com/api-manager/1.x/request-access-to-api-task
https://docs.mulesoft.com/exchange/to-request-access
https://docs.mulesoft.com/api-manager/2.x/policy-mule3-client-id-based-policies
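As a small illustration of invoking an API governed by a client ID enforcement policy, the sketch below sends the credentials obtained from Anypoint Exchange as headers. Whether the policy reads headers or query parameters depends on how the policy instance is configured; the endpoint URL and header names here are assumptions for the example.

```python
# Illustrative call to an API protected by a client ID enforcement policy.
# The endpoint and the exact header names depend on the policy configuration.
import requests

API_URL = "https://orders-api-staging.example.com/api/orders"

response = requests.get(
    API_URL,
    headers={
        # Credentials issued when access was requested via Anypoint Exchange
        # for the API instance in the STAGING environment.
        "client_id": "<client id from Exchange>",
        "client_secret": "<client secret from Exchange>",
    },
    timeout=10,
)
print(response.status_code)
```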
A company deploys Mule applications with default configurations through Runtime Manager to customer-hosted Mule runtimes. Each Mule application is an API implementation that exposes RESTful interfaces to API clients. The Mule runtimes are managed by the MuleSoft-hosted control plane. The payload is never used by any Logger components. When an API client sends an HTTP request to a customer-hosted Mule application, which metadata or data (payload) is pushed to the MuleSoft-hosted control plane?
A. Only the data
B. No data
C. The data and metadata
D. Only the metadata