What best explains the use of auto-discovery in API implementations?
A. It makes API Manager aware of API implementations and hence enables it to enforce policies
B. It enables Anypoint Studio to discover API definitions configured in Anypoint Platform
C. It enables Anypoint Exchange to discover assets and makes them available for reuse
D. It enables Anypoint Analytics to gain insight into the usage of APIs
Explanation:
Correct Answer: It makes API Manager aware of API implementations and hence enables it
to enforce policies.
*****************************************
>> API Autodiscovery is a mechanism that manages an API from API Manager by pairing
the deployed application to an API created on the platform.
>> API Management includes tracking, enforcing policies if you apply any, and reporting
API analytics.
>> Critical to the Autodiscovery process is identifying the API by providing the API name
and version (illustrated conceptually in the sketch below).
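The pairing idea can be shown with a short, purely hypothetical Python sketch (it does not call any Anypoint Platform API; all names and values are made up): the deployed application declares which API it implements, and API Manager matches that declaration against an API instance it manages so that policies and analytics can be applied.

```python
# Hypothetical illustration of the Autodiscovery "pairing" idea; not an Anypoint Platform API call.

# API instances as API Manager might know them (made-up data).
managed_apis = [
    {"id": 101, "name": "customer-api", "version": "v1"},
    {"id": 102, "name": "order-api", "version": "v2"},
]

# What a deployed Mule application declares in its autodiscovery configuration.
app_declaration = {"name": "order-api", "version": "v2"}

def pair(declaration, apis):
    """Return the managed API instance matching the application's declaration, if any."""
    for api in apis:
        if (api["name"], api["version"]) == (declaration["name"], declaration["version"]):
            return api
    return None

matched = pair(app_declaration, managed_apis)
if matched:
    # Once paired, API Manager can track the implementation, enforce applied
    # policies, and report analytics for this API instance.
    print(f"Paired with API instance {matched['id']}; policies can now be enforced")
else:
    print("No matching API instance; the implementation remains unmanaged")
```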
References:
https://docs.mulesoft.com/api-manager/2.x/api-auto-discovery-new-concept
https://docs.mulesoft.com/api-manager/1.x/api-auto-discovery
An organization has several APIs that accept JSON data over HTTP POST. The APIs are
all publicly available and are associated with several mobile applications and web
applications.
The organization does NOT want to use any authentication or compliance policies for these
APIs, but at the same time, is worried that some bad actor could send payloads that could
somehow compromise the applications or servers running the API implementations.
What out-of-the-box Anypoint Platform policy can address exposure to this threat?
A. Shut out bad actors by using HTTPS mutual authentication for all API invocations
B. Apply an IP blacklist policy to all APIs; the blacklist will include all bad actors
C. Apply a Header injection and removal policy that detects the malicious data before it is used
D. Apply a JSON threat protection policy to all APIs to detect potential threat vectors
Explanation:
Correct Answer: Apply a JSON threat protection policy to all APIs to detect potential threat
vectors
*****************************************
>> Usually, if APIs are designed and developed for specific, known consumers, we would
whitelist their IPs to ensure that traffic comes only from them.
>> However, as this scenario states that the APIs are publicly available and used by many
mobile and web applications, it is NOT possible to identify and blacklist all possible bad
actors.
>> So, a JSON threat protection policy is the best option for blocking malicious JSON
payloads from such bad actors (see the conceptual sketch below).
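As a conceptual illustration only (this is not MuleSoft's policy implementation, and the limit values are arbitrary), the Python sketch below shows the kind of structural limits a JSON threat protection policy enforces on incoming payloads: maximum nesting depth, string length, array size, and object entry count.

```python
import json

# Arbitrary example limits of the kind a JSON threat protection policy enforces.
MAX_DEPTH = 10
MAX_STRING_LENGTH = 1024
MAX_ARRAY_SIZE = 100
MAX_OBJECT_ENTRIES = 50

def check(node, depth=0):
    """Raise ValueError if the parsed JSON exceeds any structural limit."""
    if depth > MAX_DEPTH:
        raise ValueError("maximum nesting depth exceeded")
    if isinstance(node, str) and len(node) > MAX_STRING_LENGTH:
        raise ValueError("maximum string length exceeded")
    if isinstance(node, list):
        if len(node) > MAX_ARRAY_SIZE:
            raise ValueError("maximum array size exceeded")
        for item in node:
            check(item, depth + 1)
    if isinstance(node, dict):
        if len(node) > MAX_OBJECT_ENTRIES:
            raise ValueError("maximum object entry count exceeded")
        for key, value in node.items():
            check(key, depth + 1)    # keys are strings, so the length limit applies
            check(value, depth + 1)

# A well-formed payload within all limits passes; an oversized or deeply
# nested payload would be rejected before reaching the API implementation.
check(json.loads('{"customer": {"name": "Alice", "orders": [1, 2, 3]}}'))
```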
A large organization with an experienced central IT department is getting started using MuleSoft. There is a project to connect a siloed back-end system to a new Customer Relationship Management (CRM) system. The Center for Enablement is coaching them to use API-led connectivity. What action would support the creation of an application network using API-led connectivity?
A. Invite the business analyst to create a business process model to specify the canonical data model between the two systems
B. Determine if the new CRM system supports the creation of custom REST APIs, establishes a private network with CloudHub, and supports OAuth 2.0 authentication
C. To expedite this project, central IT should extend the CRM system and back-end systems to connect to one another using built-in integration interfaces
D. Create a System API to unlock the data on the back-end system using a REST API
Explanation:
Correct Answer: Create a System API to unlock the data on the back-end system using a REST API
For an organization starting with API-led connectivity to integrate a siloed back-end system
with a new CRM, this approach aligns with MuleSoft best practices and Center for
Enablement (C4E) guidance. API-led connectivity organizes APIs into distinct layers (System,
Process, and Experience) to improve reusability, modularity, and manageability, and a
System API is the layer that unlocks data in back-end systems for reuse by Process and
Experience APIs.
A company uses a hybrid Anypoint Platform deployment model that combines the EU
control plane with customer-hosted Mule runtimes. After successfully testing a Mule API
implementation in the Staging environment, the Mule API implementation is set with
environment-specific properties and must be promoted to the Production environment.
What is a way that MuleSoft recommends to configure the Mule API implementation and
automate its promotion to the Production environment?
A. Bundle properties files for each environment into the Mule API implementation's deployable archive, then promote the Mule API implementation to the Production environment using Anypoint CLI or the Anypoint Platform REST APIs
B. Modify the Mule API implementation's properties in the API Manager Properties tab, then promote the Mule API implementation to the Production environment using API Manager
C. Modify the Mule API implementation's properties in Anypoint Exchange, then promote the Mule API implementation to the Production environment using Runtime Manager
D. Use an API policy to change properties in the Mule API implementation deployed to the Staging environment and another API policy to deploy the Mule API implementation to the Production environment
Explanation:
Correct Answer: Bundle properties files for each environment into the Mule API
implementation's deployable archive, then promote the Mule API implementation to the
Production environment using Anypoint CLI or the Anypoint Platform REST APIs
*****************************************
>> Anypoint Exchange is for asset discovery and documentation. It provides no way to
modify the properties of Mule API implementations.
>> API Manager is for managing API instances, their contracts, policies, and SLAs. It
likewise provides no way to modify the properties of API implementations.
>> API policies address non-functional requirements of APIs and, again, provide no way to
modify the properties of API implementations.
So, the recommended development practice is to bundle properties files for each
environment into the Mule API implementation's deployable archive and reference the
appropriate file per environment (see the sketch below).
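Outside of Mule, the pattern can be sketched in a few lines of Python (the file names and keys below are hypothetical): one properties file per environment is bundled with the deployable artifact, and an environment name supplied at deployment time selects which file is read, so the same artifact is promoted unchanged. In a Mule application this is typically done by parameterizing the configuration properties file name with an environment property.

```python
import os

def load_properties(path):
    """Parse a simple key=value properties file into a dict."""
    props = {}
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, value = line.split("=", 1)
                    props[key.strip()] = value.strip()
    except FileNotFoundError:
        pass  # illustrative only; a real deployment should fail fast here
    return props

# The environment name is supplied per deployment (e.g. "staging" or "prod"),
# so the identical artifact can be promoted between environments unchanged.
env = os.environ.get("APP_ENV", "staging")
props = load_properties(f"config-{env}.properties")  # e.g. config-prod.properties bundled in the archive

print(f"Environment: {env}, database URL: {props.get('db.url', '<not set>')}")
```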
What are 4 important Platform Capabilities offered by Anypoint Platform?
A. API Versioning, API Runtime Execution and Hosting, API Invocation, API Consumer Engagement
B. API Design and Development, API Runtime Execution and Hosting, API Versioning, API Deprecation
C. API Design and Development, API Runtime Execution and Hosting, API Operations and Management, API Consumer Engagement
D. API Design and Development, API Deprecation, API Versioning, API Consumer Engagement
Explanation:
Correct Answer: API Design and Development, API Runtime Execution and Hosting, API
Operations and Management, API Consumer Engagement
*****************************************
>> API Design and Development - Anypoint Studio, Anypoint Design Center, Anypoint
Connectors
>> API Runtime Execution and Hosting - Mule Runtimes, CloudHub, Runtime Services
>> API Operations and Management - Anypoint API Manager, Anypoint Exchange
>> API Consumer Engagement - API Contracts, Public Portals, Anypoint Exchange, API
Notebooks
A company deploys Mule applications with default configurations through Runtime Manager to customer-hosted Mule runtimes. Each Mule application is an API implementation that exposes RESTful interfaces to API clients. The Mule runtimes are managed by the MuleSoft-hosted control plane. The payload is never used by any Logger components. When an API client sends an HTTP request to a customer-hosted Mule application, which metadata or data (payload) is pushed to the MuleSoft-hosted control plane?
A. Only the data
B. No data
C. The data and metadata
D. Only the metadata
A retail company with thousands of stores has an API to receive data about purchases and
insert it into a single database. Each individual store sends a batch of purchase data to the
API about every 30 minutes. The API implementation uses a database bulk insert
command to submit all the purchase data to a database using a custom JDBC driver
provided by a data analytics solution provider. The API implementation is deployed to a
single CloudHub worker. The JDBC driver processes the data into a set of several
temporary disk files on the CloudHub worker, and then the data is sent to an analytics
engine using a proprietary protocol. This process usually takes less than a few minutes.
Sometimes a request fails. In this case, the logs show a message from the JDBC driver
indicating an out-of-file-space message. When the request is resubmitted, it is successful.
What is the best way to try to resolve this throughput issue?
A. Use a CloudHub autoscaling policy to add CloudHub workers
B. Use a CloudHub autoscaling policy to increase the size of the CloudHub worker
C. Increase the size of the CloudHub worker(s)
D. Increase the number of CloudHub workers
Explanation:
Correct Answer: Increase the size of the CloudHub worker(s)
*****************************************
The key details that we can take out from the given scenario are:
>> API implementation uses a database bulk insert command to submit all the purchase
data to a database
>> JDBC driver processes the data into a set of several temporary disk files on the
CloudHub worker
>> Sometimes a request fails and the logs show a message indicating an out-of-file-space
message
Based on the above details:
>> Neither auto-scaling option helps, because auto-scaling rules cannot be triggered by
error messages. They are triggered by CPU or memory usage, not by a specific error or
disk-space condition.
>> Increasing the number of CloudHub workers also does NOT help, because the failures
are not caused by CPU or memory pressure but by running out of disk space.
>> Moreover, the API performs a bulk insert of each received batch, which means all of a
batch's data is handled by ONE worker at a time. The disk-space issue must therefore be
tackled on a per-worker basis: adding workers does not help, as a batch can still fail on
whichever worker runs out of disk space.
Therefore, the right way to resolve this issue is to increase the vCore size of the worker so
that a new worker with more disk space is provisioned (see the sketch below).
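To make the constraint concrete, here is an illustrative Python sketch (the path and threshold are hypothetical) of checking a single worker's free disk space before the bulk-insert step writes its temporary files. Adding more workers would not raise this per-worker limit; only a larger worker provides more local disk.

```python
import shutil

# Assume the JDBC driver's temporary files may need roughly 2 GB (hypothetical figure).
REQUIRED_FREE_BYTES = 2 * 1024 ** 3

def has_enough_disk(path="/tmp", required=REQUIRED_FREE_BYTES):
    """Return True if this worker's filesystem still has enough free space."""
    usage = shutil.disk_usage(path)
    return usage.free >= required

if has_enough_disk():
    print("Enough disk space on this worker; proceed with the bulk insert")
else:
    # On a too-small worker, this is where the driver's out-of-file-space
    # failure would surface, regardless of how many other workers exist.
    print("Insufficient disk space on this worker for the bulk insert")
```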
An organization wants to make sure only known partners can invoke the organization's
APIs. To achieve this security goal, the organization wants to enforce a Client ID
Enforcement policy in API Manager so that only registered partner applications can invoke
the organization's APIs. In what type of API implementation does MuleSoft recommend
adding an API proxy to enforce the Client ID Enforcement policy, rather than embedding
the policy directly in the application's JVM?
A. A Mule 3 application using APIkit
B. A Mule 3 or Mule 4 application modified with custom Java code
C. A Mule 4 application with an API specification
D. A Non-Mule application
Explanation:
Correct Answer: A Non-Mule application
*****************************************
>> All types of Mule applications (Mule 3, Mule 4, with APIkit, with custom Java code, etc.)
running on Mule runtimes support embedded policy enforcement.
>> The only option that does not support embedded policy enforcement, and therefore must
be fronted by an API proxy, is a non-Mule application.
So, a Non-Mule application is the right answer (see the client-side sketch below).
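For illustration, here is a minimal Python sketch of what Client ID Enforcement looks like from the API client's side, assuming the policy (applied on the API proxy fronting the non-Mule implementation) is configured to read the credentials from client_id and client_secret headers; the URL and credential values are placeholders, not real endpoints or secrets.

```python
import json
import urllib.request

# Placeholder endpoint and credentials; the proxy URL and the client_id /
# client_secret values issued at application registration are assumptions.
request = urllib.request.Request(
    "https://api.example.com/proxy/orders",
    data=json.dumps({"orderId": 42}).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "client_id": "REGISTERED_APP_CLIENT_ID",
        "client_secret": "REGISTERED_APP_CLIENT_SECRET",
    },
    method="POST",
)

# Requests without valid, registered credentials are rejected by the policy
# (typically 401/403) before they ever reach the back-end implementation.
with urllib.request.urlopen(request) as response:
    print(response.status, response.read().decode("utf-8"))
```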