Traffic is routed through an API proxy to an API implementation. The API proxy is managed
by API Manager and the API implementation is deployed to a CloudHub VPC using
Runtime Manager. API policies have been applied to this API. In this deployment scenario,
at what point are the API policies enforced on incoming API client requests?
A.
At the API proxy
B.
At the API implementation
C.
At both the API proxy and the API implementation
D.
At a MuleSoft-hosted load balancer
At the API proxy
Explanation:
Correct Answer: At the API proxy
*****************************************
>> API policies can be enforced at two places on Anypoint Platform.
>> One - as embedded policy enforcement in the same Mule runtime where the API
implementation is running.
>> Two - on an API proxy sitting in front of the Mule runtime where the API implementation is
running.
>> Because the deployment scenario in the question involves an API proxy, the policies are
enforced at the API proxy.
A company uses a hybrid Anypoint Platform deployment model that combines the EU
control plane with customer-hosted Mule runtimes. After successfully testing a Mule API
implementation in the Staging environment, the Mule API implementation is set with
environment-specific properties and must be promoted to the Production environment.
What is a way that MuleSoft recommends to configure the Mule API implementation and
automate its promotion to the Production environment?
A.
Bundle properties files for each environment into the Mule API implementation's deployable
archive, then promote the Mule API implementation to the Production environment using
Anypoint CLI or the Anypoint Platform REST APIs
B.
Modify the Mule API implementation's properties in the API Manager Properties tab, then
promote the Mule API implementation to the Production environment using API Manager
C.
Modify the Mule API implementation's properties in Anypoint Exchange, then promote the
Mule API implementation to the Production environment using Runtime Manager
D.
Use an API policy to change properties in the Mule API implementation deployed to the
Staging environment and another API policy to deploy the Mule API implementation to the
Production environment
Bundle properties files for each environment into the Mule API implementation's deployable
archive, then promote the Mule API implementation to the Production environment using
Anypoint CLI or the Anypoint Platform REST APIs
Explanation:
Correct Answer: Bundle properties files for each environment into the Mule API
implementation's deployable archive, then promote the Mule API implementation to the
Production environment using Anypoint CLI or the Anypoint Platform REST APIs
*****************************************
>> Anypoint Exchange is for asset discovery and documentation. It has no provision to
modify the properties of Mule API implementations.
>> API Manager is for managing API instances, their contracts, policies, and SLAs. It also
has no provision to modify the properties of API implementations.
>> API policies address non-functional requirements of APIs and likewise have no provision
to modify the properties of API implementations.
So the recommended development practice is to bundle properties files for each environment
into the Mule API implementation's deployable archive and simply reference the appropriate
file per environment.
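For illustration, the promotion can be scripted. Below is a minimal Python sketch against the Anypoint Platform REST APIs; the control plane host, endpoint path, header names, payload fields, and the mule.env property name are assumptions to show the shape of such automation, not the definitive API. The essential idea is that the identical deployable archive tested in Staging is deployed to Production, and only an environment selector (pointing at the bundled Production properties file) changes.

# Hedged sketch, NOT the definitive Anypoint Platform API: promote the same
# deployable archive (with all per-environment properties files bundled inside)
# to Production and select the Production properties file via a single property.
# The host, endpoint path, header names, and payload fields below are
# assumptions for illustration only.
import requests

EU_BASE = "https://eu1.anypoint.mulesoft.com"   # assumed EU control plane host


def promote_to_production(token, org_id, prod_env_id, artifact_path):
    headers = {
        "Authorization": f"Bearer {token}",
        "X-ANYPNT-ORG-ID": org_id,        # assumed Runtime Manager header names
        "X-ANYPNT-ENV-ID": prod_env_id,
    }
    with open(artifact_path, "rb") as artifact:
        resp = requests.post(
            f"{EU_BASE}/hybrid/api/v1/applications",   # assumed endpoint path
            headers=headers,
            files={"file": artifact},
            # Same artifact that passed testing in Staging; only the
            # environment selector property changes per target environment.
            data={"targetId": "prod-servers",          # placeholder target
                  "artifactName": "orders-api",        # placeholder name
                  "properties": '{"mule.env": "prod"}'},
            timeout=120,
        )
    resp.raise_for_status()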
An API implementation returns three X-RateLimit-* HTTP response headers to a requesting API client. What type of information do these response headers indicate to the API client?
A.
The error codes that result from throttling
B.
A correlation ID that should be sent in the next request
C.
The HTTP response size
D.
The remaining capacity allowed by the API implementation
The remaining capacity allowed by the API implementation
Explanation:
Correct Answer: The remaining capacity allowed by the API implementation.
*****************************************
>> Reference: https://docs.mulesoft.com/api-manager/2.x/rate-limiting-and-throttling-sla-based-policies#response-headers
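For illustration, a client can read these headers to throttle itself before exhausting its quota. A minimal Python sketch is below; the API URL is a placeholder, and the three header names are the ones documented for the SLA-based rate limiting policies.

# Minimal sketch: inspect the X-RateLimit-* response headers returned by a
# rate-limited API. The URL below is a placeholder.
import requests

resp = requests.get("https://api.example.com/accounts", timeout=10)

limit = int(resp.headers.get("X-RateLimit-Limit", 0))          # requests allowed in the current window
remaining = int(resp.headers.get("X-RateLimit-Remaining", 0))  # requests still available in the window
reset_ms = int(resp.headers.get("X-RateLimit-Reset", 0))       # milliseconds until the window resets

if remaining == 0:
    print(f"Quota of {limit} exhausted; retry after {reset_ms} ms")
else:
    print(f"{remaining} of {limit} requests remaining in this window")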
What is true about the technology architecture of Anypoint VPCs?
A.
The private IP address range of an Anypoint VPC is automatically chosen by CloudHub
B.
Traffic between Mule applications deployed to an Anypoint VPC and on-premises
systems can stay within a private network
C.
Each CloudHub environment requires a separate Anypoint VPC
D.
VPC peering can be used to link the underlying AWS VPC to an on-premises (non
AWS) private network
Traffic between Mule applications deployed to an Anypoint VPC and on-premises
systems can stay within a private network
Explanation:
Correct Answer: Traffic between Mule applications deployed to an Anypoint VPC and
on-premises systems can stay within a private network
*****************************************
>> The private IP address range of an Anypoint VPC is NOT automatically chosen by
CloudHub. It is chosen by us at the time of creating the VPC, using CIDR blocks.
CIDR Block: The size of the Anypoint VPC in Classless Inter-Domain Routing (CIDR)
notation.
For example, if you set it to 10.111.0.0/24, the Anypoint VPC is granted 256 IP addresses
from 10.111.0.0 to 10.111.0.255.
Ideally, the CIDR Blocks you choose for the Anypoint VPC come from a private IP space,
and should not overlap with any other Anypoint VPC’s CIDR Blocks, or any CIDR Blocks in
use in your corporate network.
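The address arithmetic above can be checked with Python's standard ipaddress module, as in this short sketch (the corporate network value is just an example for the overlap check):

# Verify the CIDR sizing example: a /24 block grants 256 addresses.
import ipaddress

vpc = ipaddress.ip_network("10.111.0.0/24")
print(vpc.num_addresses)                            # 256
print(vpc.network_address, vpc.broadcast_address)   # 10.111.0.0 10.111.0.255

# Overlap check against another CIDR block (e.g., the corporate network):
corporate = ipaddress.ip_network("10.111.0.0/16")
print(vpc.overlaps(corporate))                      # True -> choose a different block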
What best explains the use of auto-discovery in API implementations?
A. It makes API Manager aware of API implementations and hence enables it to enforce policies
B. It enables Anypoint Studio to discover API definitions configured in Anypoint Platform
C. It enables Anypoint Exchange to discover assets and makes them available for reuse
D. It enables Anypoint Analytics to gain insight into the usage of APIs
Explanation:
Correct Answer: It makes API Manager aware of API implementations and hence enables it
to enforce policies.
*****************************************
>> API Autodiscovery is a mechanism that manages an API from API Manager by pairing
the deployed application to an API created on the platform.
>> API Management includes tracking, enforcing policies if you apply any, and reporting
API analytics.
>> Critical to the Autodiscovery process is identifying the API by providing the API name
and version.
References:
https://docs.mulesoft.com/api-manager/2.x/api-auto-discovery-new-concept
https://docs.mulesoft.com/api-manager/1.x/api-auto-discovery
A company has created a successful enterprise data model (EDM). The company is
committed to building an application network by adopting modern APIs as a core enabler of
the company's IT operating model. At what API tiers (experience, process, system) should
the company require reusing the EDM when designing modern API data models?
A.
At the experience and process tiers
B.
At the experience and system tiers
C.
At the process and system tiers
D.
At the experience, process, and system tiers
At the process and system tiers
Explanation:
Correct Answer: At the process and system tiers
*****************************************
>> Experience layer APIs are modeled and designed exclusively for the end user's
experience, so their data models vary with the nature and type of the API consumer. For
example, mobile consumers need lightweight data models that transfer easily over the wire,
whereas web-based consumers need detailed data models to render most of the information
on web pages. So enterprise data models serve well as canonical models but are not a good
fit for experience APIs.
>> That is why EDMs should be used extensively in the process and system tiers but NOT in
the experience tier.
Refer to the exhibit.
A developer is building a client application to invoke an API deployed to the STAGING
environment that is governed by a client ID enforcement policy.
What is required to successfully invoke the API?
A.
The client ID and secret for the Anypoint Platform account owning the API in the STAGING environment
B.
The client ID and secret for the Anypoint Platform account's STAGING environment
C.
The client ID and secret obtained from Anypoint Exchange for the API instance in the
STAGING environment
D.
A valid OAuth token obtained from Anypoint Platform and its associated client ID and
secret
The client ID and secret obtained from Anypoint Exchange for the API instance in the
STAGING environment
Explanation:
Correct Answer: The client ID and secret obtained from Anypoint Exchange for the API
instance in the STAGING environment
*****************************************
>> We CANNOT use the client ID and secret of the Anypoint Platform account, or of any
individual environment, to access the APIs.
>> Because the policy enforced on the API in question is the Client ID Enforcement policy,
OAuth token-based access won't work.
The right way to access the API is to use the client ID and secret obtained from Anypoint
Exchange for the API instance in the particular environment we want to work with.
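For illustration, a minimal Python sketch of such an invocation is below. The URL is a placeholder, the credentials shown are dummy values standing in for those granted to the client application's approved contract in STAGING (requested through Anypoint Exchange), and the sketch assumes the policy is configured to read client_id and client_secret from request headers (sending them as query parameters is the other common configuration).

# Minimal sketch: invoke an API instance protected by a Client ID Enforcement
# policy. The URL is a placeholder; the credentials are those granted via
# Anypoint Exchange for this API instance in STAGING. Assumes the policy reads
# client_id/client_secret from headers.
import requests

CLIENT_ID = "xxxxxxxxxxxxxxxx"        # dummy value from the approved STAGING contract
CLIENT_SECRET = "yyyyyyyyyyyyyyyy"    # dummy value

resp = requests.get(
    "https://staging.example.com/api/orders",    # placeholder API instance URL
    headers={"client_id": CLIENT_ID, "client_secret": CLIENT_SECRET},
    timeout=10,
)
print(resp.status_code)   # 401/403 would indicate rejected or missing credentials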
References:
Managing API instance Contracts on API Manager
https://docs.mulesoft.com/api-manager/1.x/request-access-to-api-task
https://docs.mulesoft.com/exchange/to-request-access
https://docs.mulesoft.com/api-manager/2.x/policy-mule3-client-id-based-policies
A Mule application exposes an HTTPS endpoint and is deployed to the CloudHub Shared Worker Cloud. All traffic to that Mule application must stay inside the AWS VPC. To what TCP port do API invocations to that Mule application need to be sent?
A.
443
B.
8081
C.
8091
D.
8082
8082
Explanation:
Correct Answer: 8082
*****************************************
>> Ports 8091 and 8092 are used to keep your HTTP and HTTPS apps, respectively, private
to your own Anypoint VPC (the LOCAL VPC).
>> Those two ports do not apply to the shared AWS VPC / Shared Worker Cloud.
>> 8081 is the port a Shared Worker Cloud application listens on for HTTP traffic exposed to
the internet through the shared load balancer.
>> 8082 is the port a Shared Worker Cloud application listens on for HTTPS traffic exposed
to the internet through the shared load balancer; calls made directly to the worker, staying
inside the AWS VPC, therefore also use 8082.
So API invocations should be sent to port 8082 when calling this HTTPS-based app.
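To keep traffic inside the AWS VPC, the client calls the worker directly instead of going through the shared load balancer. A minimal Python sketch is below; the application name and region in the worker hostname are placeholders.

# Minimal sketch: call a Shared Worker Cloud application directly (bypassing the
# shared load balancer) so traffic stays inside the AWS VPC. HTTPS listeners on
# shared workers bind to port 8082; the app name and region are placeholders.
import requests

WORKER_URL = "https://mule-worker-orders-api.us-e1.cloudhub.io:8082/api/orders"

resp = requests.get(WORKER_URL, timeout=10)
print(resp.status_code)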
References:
https://docs.mulesoft.com/runtime-manager/cloudhub-networking-guide
https://help.mulesoft.com/s/article/Configure-Cloudhub-Application-to-Send-a-HTTPS-Request-Directly-to-Another-Cloudhub-Application
https://help.mulesoft.com/s/question/0D52T00004mXXULSA4/multiple-http-listerners-on-cloudhub-one-with-port-9090