MuleSoft MCPA-Level-1 Exam Questions

151 Questions


Update Date: 1-Dec-2025



Our MuleSoft MCPA-Level-1 exam questions are realistic, exam-like questions that cover all key topics with detailed explanations. You’ll identify your strengths and weaknesses, allowing you to focus your study efforts effectively. By practicing with our MCPA-Level-1 practice test, you’ll gain the knowledge, speed, and confidence needed to pass the MuleSoft exam on your first attempt.

Why leave your success to chance? Our MuleSoft MCPA-Level-1 dumps are your ultimate guide to passing the exam on your first try!

An organization has built an application network following the API-led connectivity approach recommended by MuleSoft. To protect the application network against attacks from malicious external API clients, the organization plans to apply JSON Threat Protection policies. To which API-led connectivity layer should the JSON Threat Protection policies most commonly be applied?


A. All layers


B. System layer


C. Process layer


D. Experience layer





D.
  Experience layer

A company wants to move its Mule API implementations into production as quickly as
possible. To protect access to all Mule application data and metadata, the company
requires that all Mule applications be deployed to the company's customer-hosted
infrastructure within the corporate firewall. What combination of runtime plane and control
plane options meets these project lifecycle goals?


A.

Manually provisioned customer-hosted runtime plane and customer-hosted control plane


B.

MuleSoft-hosted runtime plane and customer-hosted control plane


C.

Manually provisioned customer-hosted runtime plane and MuleSoft-hosted control plane


D.

iPaaS provisioned customer-hosted runtime plane and MuleSoft-hosted control plane





A.
  

Manually provisioned customer-hosted runtime plane and customer-hosted control plane



Explanation:
Correct Answer: Manually provisioned customer-hosted runtime plane and customer-hosted control plane
*****************************************
Two key factors from the scenario need to be taken into consideration:
>> The company requires both data and metadata to reside within the corporate firewall.
>> The company wants to stay on customer-hosted infrastructure.
Any deployment model that involves the cloud directly or indirectly (MuleSoft-hosted, or the customer's own cloud such as Azure or AWS) has to share at least the metadata. Application data can be kept inside the firewall by running Mule runtimes on a customer-hosted runtime plane, but a MuleSoft-hosted/cloud-based control plane still requires at least some minimum amount of metadata to be sent outside the corporate firewall.
Because the requirement clearly states that both data and metadata must remain within the corporate firewall, even though the customer wants to move to production as quickly as possible, the nature of their security requirements leaves no option other than a manually provisioned customer-hosted runtime plane and a customer-hosted control plane.

A customer wants to monitor and gain insights about the number of requests coming in a given time period as well as to measure key performance indicators (response times, CPU utilization, number of active APIs).
Which tool provides these data insights?


A. Anypoint Monitoring


B. API Manager


C. Runtime Alerts


D. Functional Monitoring





A.
  Anypoint Monitoring

What API policy would LEAST likely be applied to a Process API?


A.

Custom circuit breaker


B.

Client ID enforcement


C.

Rate limiting


D.

JSON threat protection





D.
  

JSON threat protection



Explanation:
Correct Answer: JSON threat protection
*****************************************
Fact: Technically, there are no restrictions on which policy can be applied to which layer; any policy can be applied to an API in any layer. However, context should be considered before blindly applying policies to APIs. That is why this question asks for the policy that would LEAST likely be applied to a Process API.
From the given options:
>> All policies except "JSON threat protection" can be applied without hesitation to APIs in the Process tier.
>> The JSON threat protection policy is ideally suited to Experience APIs, where it blocks suspicious JSON payloads coming from external API clients. It addresses the security concern of keeping possibly malicious and harmful JSON payloads from external clients out of the Experience APIs they call.
Because external API clients are NEVER allowed to call Process APIs directly, and such malicious payloads are already stopped at the Experience layer by this policy, it is LEAST likely that the same policy would be applied again to a Process-layer API.
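To make the policy's intent concrete, here is a minimal illustrative sketch (plain Python, not MuleSoft's actual policy implementation) of the kind of structural screening JSON Threat Protection performs at the edge; the limit names and values are hypothetical and only loosely mirror the policy's configurable parameters.

```python
import json

# Hypothetical limits; real values are configured on the policy itself
MAX_CONTAINER_DEPTH = 5
MAX_STRING_LENGTH = 1024
MAX_OBJECT_ENTRIES = 50
MAX_ARRAY_ELEMENTS = 100

def validate(node, depth=1):
    """Recursively reject payloads that exceed the structural limits."""
    if depth > MAX_CONTAINER_DEPTH:
        raise ValueError("JSON container depth limit exceeded")
    if isinstance(node, dict):
        if len(node) > MAX_OBJECT_ENTRIES:
            raise ValueError("JSON object entry count limit exceeded")
        for value in node.values():
            validate(value, depth + 1)
    elif isinstance(node, list):
        if len(node) > MAX_ARRAY_ELEMENTS:
            raise ValueError("JSON array element count limit exceeded")
        for value in node:
            validate(value, depth + 1)
    elif isinstance(node, str) and len(node) > MAX_STRING_LENGTH:
        raise ValueError("JSON string value length limit exceeded")

def screen_request(raw_body: str):
    """Screen an inbound Experience-layer request body before it reaches Process APIs."""
    validate(json.loads(raw_body))

# Example: an excessively nested payload is rejected at the edge
# screen_request('{"a": {"b": {"c": {"d": {"e": {"f": 1}}}}}}')  # raises ValueError
```

Since this screening happens where external clients enter the application network, repeating it on Process APIs adds processing overhead without adding protection.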

A large organization with an experienced central IT department is getting started using MuleSoft. There is a project to connect a siloed back-end system to a new Customer Relationship Management (CRM) system. The Center for Enablement is coaching them to use API-led connectivity. What action would support the creation of an application network using API-led connectivity?


A. Invite the business analyst to create a business process model to specify the canonical data model between the two systems


B. Determine if the new CRM system supports the creation of custom REST APIs, establishes a private network with CloudHub, and supports OAuth 2.0 authentication


C. To expedite this project, central IT should extend the CRM system and back-end systems to connect to one another using built-in integration interfaces


D. Create a System API to unlock the data on the back-end system using a REST API





D.
  Create a System API to unlock the data on the back-end system using a REST API

Explanation:
For an organization starting with API-led connectivity to integrate a siloed back-end system with a new CRM, creating a System API that unlocks the back-end data through a REST interface is the action that aligns with MuleSoft best practices and the Center for Enablement (C4E) guidance. API-led connectivity organizes APIs into distinct layers (System, Process, and Experience) to improve reusability, modularity, and manageability, and the System layer is where back-end systems are first exposed as reusable building blocks of the application network.
Why the other options fall short:

  • A business process model and canonical data model (A) may be useful analysis artifacts, but they do not by themselves create any reusable API.
  • Assessing the CRM's connectivity capabilities (B) is due diligence, not an action that builds the application network.
  • Connecting the CRM and back-end directly through built-in interfaces (C) is point-to-point integration, which API-led connectivity is meant to avoid.
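As a rough illustration of option D, the sketch below uses plain Python with Flask standing in for a Mule application; the endpoint, back-end lookup, and field names are all hypothetical. It shows the System-layer idea of unlocking a siloed back-end record behind a stable REST contract that Process and Experience APIs can later reuse.

```python
# Minimal sketch of a System API "unlocking" a siloed back-end over REST.
# Flask stands in for a Mule application; the back-end lookup and its
# field names are hypothetical and exist only to illustrate the layering.
from flask import Flask, jsonify

app = Flask(__name__)

def fetch_customer_from_backend(customer_id: str) -> dict:
    """Placeholder for the proprietary back-end lookup (JDBC, SOAP, flat file, etc.)."""
    return {"CUST_ID": customer_id, "CUST_NM": "Acme Corp", "CUST_STS": "A"}

@app.route("/customers/<customer_id>", methods=["GET"])
def get_customer(customer_id):
    record = fetch_customer_from_backend(customer_id)
    # The System API hides legacy field names behind a stable, reusable contract
    return jsonify({
        "id": record["CUST_ID"],
        "name": record["CUST_NM"],
        "active": record["CUST_STS"] == "A",
    })

if __name__ == "__main__":
    app.run(port=8081)
```

Once the back-end is exposed this way, a Process API can orchestrate it alongside the CRM's own APIs instead of the two systems being wired together point-to-point.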

A Production environment is running on a dedicated Virtual Private Cloud (VPC) on CloudHub 1.0, and the security team guidelines clearly state no traffic on HTTP. Which two options support these security guidelines?


A. Option A


B. Option B


C. Option C


D. Option D


E. Option E





A.
  Option A

C.
  Option C

A Mule application exposes an HTTPS endpoint and is deployed to the CloudHub Shared Worker Cloud. All traffic to that Mule application must stay inside the AWS VPC. To what TCP port do API invocations to that Mule application need to be sent?


A.

443


B.

8081


C.

8091


D.

8082





D.
  

8082



Explanation:
Correct Answer: 8082
*****************************************
>> Ports 8091 and 8092 are used to keep your HTTP and HTTPS apps (respectively) private to the LOCAL VPC.
>> Those two ports do not apply to the Shared AWS VPC / Shared Worker Cloud.
>> Port 8081 is used when exposing your HTTP endpoint app to the internet through the Shared LB.
>> Port 8082 is used when exposing your HTTPS endpoint app to the internet through the Shared LB.
So API invocations should be sent to port 8082 when calling this HTTPS-based app.
References:
https://docs.mulesoft.com/runtime-manager/cloudhub-networking-guide
https://help.mulesoft.com/s/article/Configure-Cloudhub-Application-to-Send-a-HTTPSRequest-Directly-to-Another-Cloudhub-Application
https://help.mulesoft.com/s/question/0D52T00004mXXULSA4/multiple-http-listerners-oncloudhub-one-with-port-9090
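As a quick illustration, the hypothetical snippet below sends an API invocation to HTTPS port 8082; the application name, region, and resource path are invented, and the mule-worker-* host follows the worker-direct DNS convention described in the CloudHub networking guide referenced above.

```python
import requests

APP_NAME = "exp-salesorder-api"   # hypothetical application name
REGION = "us-e1"                  # hypothetical CloudHub region

# Port reference for apps on the CloudHub Shared Worker Cloud:
#   8081 - HTTP  endpoint exposed through the Shared LB
#   8082 - HTTPS endpoint exposed through the Shared LB
#   8091 - HTTP  endpoint kept private to the local VPC
#   8092 - HTTPS endpoint kept private to the local VPC
worker_url = f"https://mule-worker-{APP_NAME}.{REGION}.cloudhub.io:8082/api/orders"

response = requests.get(worker_url, timeout=10)
print(response.status_code, response.text)
```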

What best describes the Fully Qualified Domain Names (FQDNs), also known as DNS entries, created when a Mule application is deployed to the CloudHub Shared Worker Cloud?


A.

A fixed number of FQDNs are created, IRRESPECTIVE of the environment and VPC design


B.

The FQDNs are determined by the application name chosen, IRRESPECTIVE of the region


C.

The FQDNs are determined by the application name, but can be modified by an
administrator after deployment


D.

The FQDNs are determined by both the application name and the Anypoint Platform
organization





B.
  

The FQDNs are determined by the application name chosen, IRRESPECTIVE of the region



Explanation:
Correct Answer: The FQDNs are determined by the application name chosen, IRRESPECTIVE of the region
*****************************************
>> When deploying applications to the Shared Worker Cloud, the FQDN is always determined by the application name chosen.
>> It does NOT matter which region the app is deployed to.
>> Although the generated FQDN does include the region (e.g., exp-salesorder-api.au-s1.cloudhub.io), that does NOT mean the same application name can be reused when deploying to another CloudHub region.
>> The application name must be universally unique, irrespective of region and organization, and it alone determines the FQDN for the Shared Load Balancers.
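The sketch below is illustrative only, reusing the example application name and region from the explanation; it shows how the generated host name is derived from the application name, which is why that name must be globally unique.

```python
def cloudhub_fqdn(app_name: str, region: str) -> str:
    """Host name generated for an app on the Shared Worker Cloud (illustrative)."""
    return f"{app_name}.{region}.cloudhub.io"

print(cloudhub_fqdn("exp-salesorder-api", "au-s1"))
# exp-salesorder-api.au-s1.cloudhub.io

# The region appears in the host name, but uniqueness is enforced on the
# application name alone, so the same name cannot be reused in another
# region or organization.
```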

