True or False. We should always make sure that the APIs being designed and developed are self-servable, even if that requires additional man-days of effort and resources.
A.
FALSE
B.
TRUE
TRUE
Explanation:
Correct Answer: TRUE
*****************************************
>> As per MuleSoft's proposed IT Operating Model, designing APIs and making sure that
they are discoverable and self-servable is VERY important and determines the
success of an API and its application network.
What condition requires using a CloudHub Dedicated Load Balancer?
A.
When cross-region load balancing is required between separate deployments of the same Mule application
B.
When custom DNS names are required for API implementations deployed to customer-hosted Mule runtimes
C.
When API invocations across multiple CloudHub workers must be load balanced
D.
When server-side load-balanced TLS mutual authentication is required between API
implementations and API clients
When server-side load-balanced TLS mutual authentication is required between API
implementations and API clients
Explanation:
Correct Answer: When server-side load-balanced TLS mutual authentication is required
between API implementations and API clients
*****************************************
Fact/Memory Tip: Although there are many benefits of a CloudHub Dedicated Load
Balancer, TWO important things that should come to one's mind when considering it are:
>> Having URL endpoints with custom DNS names on CloudHub-deployed apps
>> Configuring custom certificates for both HTTPS and two-way (mutual) TLS authentication.
Coming to the options provided for this question:
>> We CANNOT use a DLB to perform cross-region load balancing between separate
deployments of the same Mule application.
>> We can have mapping rules so that more than one DLB URL points to the same Mule
app, but the reverse (more than one Mule app sharing the same DLB URL) is NOT
POSSIBLE (see the sketch after the reference link below).
>> It is true that a DLB helps to set up custom DNS names for CloudHub-deployed Mule
apps, but NOT for apps deployed to customer-hosted Mule runtimes.
>> It is true that we can load balance API invocations across multiple CloudHub workers
using a DLB, but it is NOT A MUST. We can achieve the same load balancing using the
SLB (Shared Load Balancer) too, so we do NOT necessarily require a DLB for that.
So the only option that fits the scenario and requires us to use a DLB is when server-side
load-balanced TLS mutual authentication is required between API implementations and API clients.
Reference: https://docs.mulesoft.com/runtime-manager/cloudhub-dedicated-load-balancer
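To illustrate the mapping-rule point above, here is a minimal Python sketch. It is
illustrative pseudologic only, not the actual DLB configuration syntax, and all rule values
(paths, app names, the ".internal" host) are made up. It simply shows that several DLB
URLs may point to the same app, while each inbound URL pattern resolves to exactly one app.
```python
# Illustrative only: a simplified model of DLB mapping rules (NOT real DLB config syntax).
# Each rule maps an inbound path prefix on the DLB's custom domain to exactly ONE CloudHub app.
MAPPING_RULES = [
    {"input_path": "/orders/",  "app": "exp-salesorder-api", "output_path": "/api/"},
    {"input_path": "/salesv2/", "app": "exp-salesorder-api", "output_path": "/api/"},  # two DLB URLs -> same app: allowed
    {"input_path": "/billing/", "app": "sys-billing-api",    "output_path": "/"},
]

def resolve(request_path: str) -> str:
    """Return the internal worker URL for the first rule that matches the request path."""
    for rule in MAPPING_RULES:
        if request_path.startswith(rule["input_path"]):
            suffix = request_path[len(rule["input_path"]):]
            # One inbound URL pattern only ever resolves to one app here, which mirrors
            # the "one DLB URL cannot point to many apps" constraint noted above.
            return f"https://{rule['app']}.internal{rule['output_path']}{suffix}"
    raise LookupError("No mapping rule matched")

print(resolve("/orders/123"))   # https://exp-salesorder-api.internal/api/123
print(resolve("/salesv2/123"))  # same app, reached via a different DLB URL
```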
What best describes the Fully Qualified Domain Names (FQDNs), also known as DNS entries, created when a Mule application is deployed to the CloudHub Shared Worker Cloud?
A.
A fixed number of FQDNs are created, IRRESPECTIVE of the environment and VPC design
B.
The FQDNs are determined by the application name chosen, IRRESPECTIVE of the region
C.
The FQDNs are determined by the application name, but can be modified by an
administrator after deployment
D.
The FQDNs are determined by both the application name and the Anypoint Platform
organization
The FQDNs are determined by the application name chosen, IRRESPECTIVE of the region
Explanation:
Correct Answer: The FQDNs are determined by the application name chosen,
IRRESPECTIVE of the region
*****************************************
>> When deploying applications to the Shared Worker Cloud, the FQDN is always
determined by the application name chosen.
>> It does NOT matter what region the app is being deployed to.
>> Although it is true that the generated FQDN will have the region included in it
(e.g. exp-salesorder-api.au-s1.cloudhub.io), it does NOT mean that the same name can be
reused when deploying to another CloudHub region.
>> The application name must be universally unique irrespective of region and organization,
and it solely determines the FQDN on the Shared Load Balancer (a sketch of this naming
scheme follows).
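A minimal sketch of that naming scheme, based on the example FQDN quoted above. The
au-s1 suffix comes from that example; the other suffix value and the no-suffix
default-region form are assumptions for illustration, not an official or exhaustive list.
```python
# Illustrative sketch: how a Shared Worker Cloud FQDN is formed.
# The application name alone is the globally unique key; the region only shows up
# as a host suffix. Suffix values below are assumptions for illustration.
REGION_SUFFIX = {
    "us-east-1": "cloudhub.io",             # assumed default-region form (no region label)
    "ap-southeast-2": "au-s1.cloudhub.io",  # matches the example FQDN in the explanation
}

def shared_lb_fqdn(app_name: str, region: str) -> str:
    """The app name must be universally unique; the region only changes the host suffix."""
    return f"{app_name}.{REGION_SUFFIX[region]}"

print(shared_lb_fqdn("exp-salesorder-api", "ap-southeast-2"))
# exp-salesorder-api.au-s1.cloudhub.io
```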
Which APIs can be used with DataGraph to create a unified schema?

A. APIs 1, 3, 5
B. APIs 2, 4, 6
C. APIs 1, 2, 5, 6
D. APIs 1, 2, 3, 4
Explanation:
To create a unified schema in MuleSoft's DataGraph, APIs must be exposed
in a way that allows DataGraph to pull and consolidate data from these APIs into a single
schema accessible to consumers. DataGraph provides a federated approach, combining
multiple APIs to form a single, unified API endpoint.
In this setup:
APIs 1, 2, 3, and 4 are suitable candidates for DataGraph because they are hosted
within the Customer VPC on CloudHub and are accessible either through a
Shared Load Balancer (LB) or a Dedicated Load Balancer (DLB). Both of these
load balancers provide public access, which is a necessary condition for
DataGraph as it must access the APIs to aggregate data.
APIs 5 and 6 are hosted on Customer Hosted Server 2, which is explicitly marked
as "Not public". Since DataGraph requires API access through a publicly
reachable endpoint to aggregate them into a unified schema, APIs 5 and 6 cannot
be used with DataGraph in this configuration.
APIs 3 and 4 on Customer Hosted Server 1 appear accessible through a Shared
LB, implying public accessibility that meets DataGraph’s requirements.
By combining APIs 1, 2, 3, and 4 within DataGraph, you can create a unified schema that
enables clients to query data seamlessly from all these APIs as if it were from a single
source.
This setup allows for efficient data retrieval and can simplify API consumption by reducing
the need to call multiple APIs individually, thus optimizing performance and developer
experience.
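To make the "single source" idea concrete, here is a hedged sketch of what a client call
against a unified DataGraph schema might look like. The endpoint URL, token, and field
names are hypothetical placeholders, not values from the exhibit; the point is only that
one GraphQL request can return data that DataGraph stitches together from several
underlying APIs.
```python
# Hypothetical example: querying a unified DataGraph schema with one GraphQL request.
# The endpoint, token, and field names below are placeholders, not real values.
import requests

DATAGRAPH_URL = "https://example.anypoint.mulesoft.com/datagraph/api/v1/graphql"  # placeholder
QUERY = """
{
  customer(id: "42") {          # resolved from one underlying API
    name
    orders {                    # resolved from another API, stitched by DataGraph
      id
      total
    }
  }
}
"""

response = requests.post(
    DATAGRAPH_URL,
    json={"query": QUERY},
    headers={"Authorization": "Bearer <access-token>"},  # placeholder credential
    timeout=30,
)
response.raise_for_status()
print(response.json())
```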
Refer to the exhibit.

A. Option A
B. Option B
C. Option C
D. Option D
Explanation:
An established communications company is beginning its API-led connectivity journey. The
company has been using a successful Enterprise Data Model for many years. The company
has identified a self-service account management app as the first effort for API-led
connectivity, and it has identified the following APIs.
A. Customer SAPI
B. Customer Lookup PAPI
C. Mobile Account Management EAPI
D. Service SAPI
Explanation: In the API-led connectivity approach, APIs are categorized into Experience,
Process, and System layers:
Enterprise Data Model Scope:
Why Option C is Correct:
Explanation of Incorrect Options:
References:
For additional guidance, review MuleSoft's best practices on API-led
connectivity and data modeling.
In an organization, the InfoSec team is investigating Anypoint Platform related data traffic. From where does most of the data available to Anypoint Platform for monitoring and alerting originate?
A.
From the Mule runtime or the API implementation, depending on the deployment model
B.
From various components of Anypoint Platform, such as the Shared Load Balancer, VPC, and Mule runtimes
C.
From the Mule runtime or the API Manager, depending on the type of data
D.
From the Mule runtime irrespective of the deployment model
From the Mule runtime irrespective of the deployment model
Explanation:
Correct Answer: From the Mule runtime irrespective of the deployment model
*****************************************
>> Monitoring and alerting metrics always originate from Mule runtimes, irrespective
of the deployment model.
>> It may seem that some metrics (Runtime Manager) originate from the Mule runtime
and some (API invocations/API analytics) originate from API Manager. However, this is
NOT true. The reason is that API Manager is just a management tool for API
instances; all policies applied to APIs eventually get executed on Mule
runtimes only (either embedded or as an API proxy).
>> Similarly, all API implementations also run on Mule runtimes.
So, most of the data required for monitoring and alerting originates from Mule runtimes
only, irrespective of whether the deployment model is MuleSoft-hosted, customer-hosted,
or hybrid.
A retail company with thousands of stores has an API to receive data about purchases and
insert it into a single database. Each individual store sends a batch of purchase data to the
API about every 30 minutes. The API implementation uses a database bulk insert
command to submit all the purchase data to a database using a custom JDBC driver
provided by a data analytics solution provider. The API implementation is deployed to a
single CloudHub worker. The JDBC driver processes the data into a set of several
temporary disk files on the CloudHub worker, and then the data is sent to an analytics
engine using a proprietary protocol. This process usually takes less than a few minutes.
Sometimes a request fails. In this case, the logs show a message from the JDBC driver
indicating an out-of-file-space message. When the request is resubmitted, it is successful.
What is the best way to try to resolve this throughput issue?
A.
Use a CloudHub autoscaling policy to add CloudHub workers
B.
Use a CloudHub autoscaling policy to increase the size of the CloudHub worker
C.
Increase the size of the CloudHub worker(s)
D.
Increase the number of CloudHub workers
Increase the size of the CloudHub worker(s)
Explanation:
Correct Answer: Increase the size of the CloudHub worker(s)
*****************************************
The key details that we can take away from the given scenario are:
>> The API implementation uses a database bulk insert command to submit all the purchase
data to a database
>> The JDBC driver processes the data into a set of several temporary disk files on the
CloudHub worker
>> Sometimes a request fails, and the logs show a message from the JDBC driver
indicating it is out of file space
Based on the above details:
>> Both autoscaling options do NOT help, because we cannot set autoscaling rules
based on error messages. Autoscaling rules are triggered by CPU/memory usage,
not by a given error or disk space issue.
>> Increasing the number of CloudHub workers also does NOT help here, because the
failure is not caused by CPU or memory limits; it is caused by disk space.
>> Moreover, the API performs a bulk insert of the received batch data, which means
each batch is handled by ONE worker at a time. So the disk space issue must be
tackled on a per-worker basis. Having multiple workers does not help, as a batch may
still fail on any worker whose disk runs out of space.
Therefore, the right way to deal with this issue is to increase the vCore size of
the worker so that a new worker with more disk space is provisioned (a rough sketch of
the per-worker disk constraint follows).
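As a rough illustration of that per-worker constraint, here is a minimal Python sketch
with made-up sizes; it is not MuleSoft code and does not reflect official CloudHub worker
specifications.
```python
# Illustrative sketch only: the failure is per-worker disk space, not CPU or memory.
# A bulk insert stages the whole batch as temporary files on ONE worker's local disk,
# so only that worker's free space matters. Sizes below are made up.
import shutil
import tempfile

def can_stage_batch(batch_size_bytes: int, staging_dir: str = tempfile.gettempdir()) -> bool:
    """Only this worker's free disk matters; adding more workers does not raise it."""
    free_bytes = shutil.disk_usage(staging_dir).free
    return free_bytes > batch_size_bytes

batch_size = 3 * 1024**3  # e.g. a 3 GB purchase batch from one store (made-up figure)
if can_stage_batch(batch_size):
    print("Batch can be staged as temp files on this worker")
else:
    # This is where the JDBC driver would hit its out-of-file-space condition;
    # a larger worker size provides more local disk and raises this per-worker limit.
    print("Not enough local disk on this worker to stage the batch")
```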