An organization wants to make sure only known partners can invoke the organization's
APIs. To achieve this security goal, the organization wants to enforce a Client ID
Enforcement policy in API Manager so that only registered partner applications can invoke
the organization's APIs. In what type of API implementation does MuleSoft recommend
adding an API proxy to enforce the Client ID Enforcement policy, rather than embedding
the policy directly in the application's JVM?
A.
A Mule 3 application using APIkit
B.
A Mule 3 or Mule 4 application modified with custom Java code
C.
A Mule 4 application with an API specification
D.
A Non-Mule application
A Non-Mule application
Explanation:
Correct Answer: A Non-Mule application
*****************************************
>> All types of Mule applications (Mule 3, Mule 4, with APIkit, with custom Java code, etc.)
running on Mule runtimes support embedded policy enforcement.
>> The only option that does not support embedded policy enforcement, and therefore
requires an API proxy, is a non-Mule application.
So, Non-Mule application is the right answer.
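For context, once a Client ID Enforcement policy is applied (whether embedded in the Mule
application or on an API proxy), registered client applications must send their credentials with
every request. Below is a minimal Mule 4 sketch of such a call, assuming the policy is
configured to read client_id and client_secret from request headers; the host, path, and the
app.clientId / app.clientSecret property names are hypothetical placeholders, not values from
the scenario.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Minimal sketch of a registered client application calling a Client ID
         Enforcement-protected API. Host, path, and the app.clientId /
         app.clientSecret properties are hypothetical placeholders. -->
    <mule xmlns="http://www.mulesoft.org/schema/mule/core"
          xmlns:http="http://www.mulesoft.org/schema/mule/http"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="
            http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
            http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd">

        <http:request-config name="partnerApiRequestConfig">
            <http:request-connection protocol="HTTPS" host="api.example.org" port="443"/>
        </http:request-config>

        <flow name="call-protected-api-flow">
            <!-- Scheduler trigger just to make the sketch runnable on its own -->
            <scheduler>
                <scheduling-strategy>
                    <fixed-frequency frequency="60000"/>
                </scheduling-strategy>
            </scheduler>
            <!-- The policy (embedded or on a proxy) rejects requests that do not
                 carry valid, registered client credentials. -->
            <http:request config-ref="partnerApiRequestConfig" method="GET" path="/partners/orders">
                <http:headers><![CDATA[#[{
                    'client_id'     : p('app.clientId'),
                    'client_secret' : p('app.clientSecret')
                }]]]></http:headers>
            </http:request>
            <logger level="INFO" message="#[payload]"/>
        </flow>
    </mule>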
A Platform Architect inherits a legacy monolithic SOAP-based web service that performs a number of tasks, including showing all policies belonging to a client. The service connects to two back-end systems — a life-insurance administration system and a general-insurance administration system — and then queries for insurance policy information within each system, aggregates the results, and presents a SOAP-based response to a user interface (UI). The architect wants to break up the monolithic web service to follow API-led conventions. Which part of the service should be put into the process layer?
A. Combining the insurance policy information from the administration systems
B. Presenting the SOAP-based response to the UI
C. Authenticating and maintaining connections to each of the back-end administration systems
D. Querying the data from the administration systems
Explanation:
Correct Answer: Combining the insurance policy information from the administration systems
In the API-led connectivity approach, each layer (System, Process, and
Experience) has a distinct purpose:
>> System APIs unlock the back-end systems: authenticating against, connecting to, and
querying the life-insurance and general-insurance administration systems.
>> Process APIs orchestrate and shape that data independently of how it is presented: this is
where combining (aggregating) the policy information from the two administration systems
belongs.
>> Experience APIs tailor the result for a specific consumer, such as presenting the
response to the UI.
So, combining the insurance policy information from the administration systems is the part
of the service that should be put into the process layer.
A REST API is being designed, to be implemented by a Mule application.
What standard interface definition language can be used to define REST APIs?
A.
Web Services Description Language (WSDL)
B.
OpenAPI Specification (OAS)
C.
YAML
D.
AsyncAPI Specification
OpenAPI Specification (OAS)
What best explains the use of auto-discovery in API implementations?
A. It makes API Manager aware of API implementations and hence enables it to enforce policies
B. It enables Anypoint Studio to discover API definitions configured in Anypoint Platform
C. It enables Anypoint Exchange to discover assets and makes them available for reuse
D. It enables Anypoint Analytics to gain insight into the usage of APIs
Explanation:
Correct Answer: It makes API Manager aware of API implementations and hence enables it
to enforce policies.
*****************************************
>> API Autodiscovery is the mechanism that lets API Manager manage an API by pairing
the deployed application with an API instance created on the platform.
>> API management includes tracking, enforcing any policies you apply, and reporting
API analytics.
>> Critical to the Autodiscovery process is identifying the API instance, either by API name
and version (older model) or by its API instance ID (current model).
References:
https://docs.mulesoft.com/api-manager/2.x/api-auto-discovery-new-concept
https://docs.mulesoft.com/api-manager/1.x/api-auto-discovery
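For illustration, in a Mule 4 application the pairing is declared with the api-gateway:autodiscovery
global element. A minimal sketch follows, under the assumption that the API instance ID is
supplied through a property named api.id and that the managed flow is called
get-quote-main-flow; both names, and the listener configuration, are placeholders.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Minimal sketch of Mule 4 API Autodiscovery.
         ${api.id} and the flow/listener names are placeholders. -->
    <mule xmlns="http://www.mulesoft.org/schema/mule/core"
          xmlns:http="http://www.mulesoft.org/schema/mule/http"
          xmlns:api-gateway="http://www.mulesoft.org/schema/mule/api-gateway"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="
            http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
            http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
            http://www.mulesoft.org/schema/mule/api-gateway http://www.mulesoft.org/schema/mule/api-gateway/current/mule-api-gateway.xsd">

        <http:listener-config name="apiHttpListenerConfig">
            <http:listener-connection host="0.0.0.0" port="8081"/>
        </http:listener-config>

        <!-- Pairs this deployed application with the API instance in API Manager,
             so applied policies are enforced and analytics are reported. -->
        <api-gateway:autodiscovery apiId="${api.id}" flowRef="get-quote-main-flow"/>

        <flow name="get-quote-main-flow">
            <http:listener config-ref="apiHttpListenerConfig" path="/api/*"/>
            <logger level="INFO" message="Request received"/>
        </flow>
    </mule>

The apiId value is what binds the running implementation to the API instance in API Manager;
without it, policies applied in API Manager are not enforced on the application.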
A Mule application implements an API. The Mule application has an HTTP Listener whose connector configuration sets the HTTPS protocol and hard-codes the port value. The Mule application is deployed to an Anypoint VPC and uses the CloudHub 1.0 Shared Load Balancer (SLB) for all incoming traffic. Which port number must be assigned to the HTTP Listener's connector configuration so that the Mule application properly receives HTTPS API invocations routed through the SLB?
A. 8082
B. 8092
C. 80
D. 443
Explanation:
Correct Answer: 8082
When using CloudHub 1.0’s Shared Load Balancer (SLB) for a Mule
application configured with HTTPS in an Anypoint VPC, specific ports must be used
for the application to correctly receive incoming traffic:
>> The SLB listens on port 80 (HTTP) and forwards that traffic to port 8081 on the workers.
>> The SLB listens on port 443 (HTTPS) and forwards that traffic to port 8082 on the workers.
>> Ports 8091 and 8092 are used for traffic that reaches the workers from inside the VPC
(for example, via a Dedicated Load Balancer), not through the SLB.
>> Ports 80 and 443 are the ports the SLB itself listens on externally; they are not ports the
application's listener binds to.
So, for HTTPS API invocations routed through the SLB, the HTTP Listener's connector
configuration must use port 8082.
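As a sketch of what this looks like in the Mule application, the HTTPS listener below binds to
port 8082, the worker port to which the SLB forwards traffic received on 443. The keystore file,
alias, password properties, and flow name are placeholders; in practice the reserved
${https.port} property resolves to 8082 on CloudHub workers, but the scenario states the port
value is hard-coded.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Minimal sketch: HTTPS listener bound to 8082 so the CloudHub 1.0 SLB
         (listening on 443) can forward requests to it. Keystore details are placeholders. -->
    <mule xmlns="http://www.mulesoft.org/schema/mule/core"
          xmlns:http="http://www.mulesoft.org/schema/mule/http"
          xmlns:tls="http://www.mulesoft.org/schema/mule/tls"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="
            http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
            http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
            http://www.mulesoft.org/schema/mule/tls http://www.mulesoft.org/schema/mule/tls/current/mule-tls.xsd">

        <http:listener-config name="httpsListenerConfig">
            <http:listener-connection protocol="HTTPS" host="0.0.0.0" port="8082">
                <tls:context>
                    <tls:key-store type="jks" path="keystore.jks" alias="api-key"
                                   keyPassword="${keystore.key.password}" password="${keystore.password}"/>
                </tls:context>
            </http:listener-connection>
        </http:listener-config>

        <flow name="quote-api-main-flow">
            <http:listener config-ref="httpsListenerConfig" path="/api/*"/>
            <logger level="INFO" message="HTTPS request received via the Shared Load Balancer"/>
        </flow>
    </mule>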
An organization uses various cloud-based SaaS systems and multiple on-premises
systems. The on-premises systems are an important part of the organization's application
network and can only be accessed from within the organization's intranet.
What is the best way to configure and use Anypoint Platform to support integrations with
both the cloud-based SaaS systems and on-premises systems?
A) Use CloudHub-deployed Mule runtimes in an Anypoint VPC managed by Anypoint
Platform Private Cloud Edition control plane
B) Use a combination of CloudHub-deployed and manually provisioned on-premises Mule
runtimes managed by the MuleSoft-hosted Platform control plane
A.
Option A
B.
Option B
C.
Option C
D.
Option D
Option B
Explanation:
Correct Answer: Use a combination of CloudHub-deployed and manually provisioned
on-premises Mule runtimes managed by the MuleSoft-hosted Platform control plane.
*****************************************
Key details to be taken from the given scenario:
>> Organization uses BOTH cloud-based and on-premises systems
>> On-premises systems can only be accessed from within the organization's intranet
Let us evaluate the given choices based on above key details:
>> CloudHub-deployed Mule runtimes can ONLY be managed by the MuleSoft-hosted
control plane. The Anypoint Platform Private Cloud Edition control plane CANNOT manage
CloudHub Mule runtimes. So, the option suggesting this is INVALID.
>> Using only CloudHub-deployed Mule runtimes in the shared worker cloud managed by
the MuleSoft-hosted Anypoint Platform leaves no way to reach the on-premises systems,
which are accessible only from within the organization's intranet. So, the option suggesting
this is INVALID.
>> Using an on-premises installation of Mule runtimes that is completely isolated with NO
external network access, managed by the Anypoint Platform Private Cloud Edition control
plane, would work for the on-premises integrations. However, with NO external access,
integrations with the cloud-based SaaS systems are not possible (and CloudHub-hosted
applications are a better fit for SaaS integrations anyway). So, the option suggesting this is
also INVALID.
The best way to configure and use Anypoint Platform to support these mixed/hybrid
integrations is therefore to use a combination of CloudHub-deployed and manually
provisioned on-premises Mule runtimes managed by the MuleSoft-hosted Platform control
plane.
An organization is implementing a Quote of the Day API that caches today's quote.
What scenario can use the CloudHub Object Store via the Object Store connector to persist
the cache's state?
A.
When there are three CloudHub deployments of the API implementation to three
separate CloudHub regions that must share the cache state
B.
When there are two CloudHub deployments of the API implementation by two Anypoint
Platform business groups to the same CloudHub region that must share the cache state
C.
When there is one deployment of the API implementation to CloudHub and another
deployment to a customer-hosted Mule runtime that must share the cache state
D.
When there is one CloudHub deployment of the API implementation to three CloudHub
workers that must share the cache state
When there is one CloudHub deployment of the API implementation to three CloudHub
workers that must share the cache state
Explanation:
Correct Answer: When there is one CloudHub deployment of the API implementation to
three CloudHub workers that must share the cache state.
*****************************************
Key details in the scenario:
>> Use the CloudHub Object Store via the Object Store connector
Considering above details:
>> A CloudHub Object Store has a one-to-one relationship with its CloudHub Mule application.
>> An application's CloudHub Object Store CANNOT be shared, via the Object Store
connector, among multiple Mule applications running in different regions, in different
business groups, or on customer-hosted Mule runtimes.
>> If such sharing is truly required, Anypoint Platform does allow another application's
CloudHub Object Store to be accessed through the Object Store REST API, but NOT through
the Object Store connector.
So, the only scenario where the CloudHub Object Store can be used via the Object Store
connector to persist the cache's state is when there is one CloudHub deployment of the
API implementation to multiple CloudHub workers that must share the cache state.
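To illustrate the supported scenario, a single CloudHub-deployed application (even when scaled
to three workers) can persist the cached quote through the Object Store connector roughly as
follows. This is a minimal sketch: the store name, key, TTL, and sub-flow names are placeholders.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Minimal sketch: caching today's quote in the CloudHub Object Store via the
         Object Store connector. Store, key, TTL, and sub-flow names are placeholders. -->
    <mule xmlns="http://www.mulesoft.org/schema/mule/core"
          xmlns:os="http://www.mulesoft.org/schema/mule/os"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="
            http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
            http://www.mulesoft.org/schema/mule/os http://www.mulesoft.org/schema/mule/os/current/mule-os.xsd">

        <!-- Persistent store: on CloudHub this is backed by the CloudHub Object Store
             and is shared by all workers of this one application. -->
        <os:object-store name="quoteCache" persistent="true"
                         entryTtl="1" entryTtlUnit="DAYS"/>

        <sub-flow name="cache-quote-sub-flow">
            <!-- Store the current payload (today's quote) under a fixed key -->
            <os:store key="quoteOfTheDay" objectStore="quoteCache">
                <os:value>#[payload]</os:value>
            </os:store>
        </sub-flow>

        <sub-flow name="read-cached-quote-sub-flow">
            <!-- Retrieve the cached quote; fall back to a default if nothing is cached yet -->
            <os:retrieve key="quoteOfTheDay" objectStore="quoteCache">
                <os:default-value>#["No quote cached yet"]</os:default-value>
            </os:retrieve>
        </sub-flow>
    </mule>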
Which APIs can be used with DataGraph to create a unified schema?
[Exhibit not reproduced: APIs 1 and 2 are deployed in the Customer VPC on CloudHub behind
the Shared/Dedicated Load Balancer; APIs 3 and 4 run on Customer Hosted Server 1 and are
reachable through the Shared LB; APIs 5 and 6 run on Customer Hosted Server 2, which is
not public.]
A. APIs 1, 3, 5
B. APIs 2, 4, 6
C. APIs 1, 2, 5, 6
D. APIs 1, 2, 3, 4
Explanation:
Correct Answer: APIs 1, 2, 3, 4
To create a unified schema in MuleSoft's DataGraph, APIs must be exposed
in a way that allows DataGraph to pull and consolidate data from these APIs into a single
schema accessible to consumers. DataGraph provides a federated approach, combining
multiple APIs to form a single, unified API endpoint.
In this setup:
APIs 1 and 2 are suitable candidates for DataGraph because they are hosted
within the Customer VPC on CloudHub and are exposed through a Shared Load
Balancer (LB) or a Dedicated Load Balancer (DLB). Both of these load balancers
provide public access, which is a necessary condition for DataGraph, since it must
be able to reach the APIs in order to aggregate their data.
APIs 3 and 4, on Customer Hosted Server 1, are likewise reachable through the
Shared LB, giving them the public accessibility that DataGraph requires.
APIs 5 and 6 are hosted on Customer Hosted Server 2, which is explicitly marked
as "Not public". Since DataGraph requires a publicly reachable endpoint in order to
aggregate an API into a unified schema, APIs 5 and 6 cannot be used with
DataGraph in this configuration.
By combining APIs 1, 2, 3, and 4 within DataGraph, you can create a unified schema that
enables clients to query data seamlessly from all these APIs as if it were from a single
source.
This setup allows for efficient data retrieval and can simplify API consumption by reducing
the need to call multiple APIs individually, thus optimizing performance and developer
experience.