Due to a limitation in the backend system, a system API can only handle up to 500
requests per second. What is the best type of API policy to apply to the system API to avoid overloading the backend system?
A.
Rate limiting
B.
HTTP caching
C.
Rate limiting - SLA based
D.
Spike control
Spike control
Explanation:
Correct Answer: Spike control
*****************************************
>> First things first, the HTTP Caching policy serves purposes other than protecting the backend system from overload, so it is OUT.
>> Rate Limiting and Throttling/Spike Control policies are both designed to limit API access, but with different intentions.
>> Rate limiting protects an API by applying a hard limit on its access.
>> Throttling/Spike Control shapes API access by smoothing spikes in traffic.
That is why Spike Control is the right option.
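To make the distinction concrete, here is a minimal Python sketch (not the actual Anypoint policy implementation), assuming the 500-requests-per-second backend limit from the question: the rate limiter rejects requests over the window quota outright, while spike control smooths bursts by pacing them.

```python
import time

LIMIT_PER_SECOND = 500           # assumed backend constraint from the question

class RateLimiter:
    """Hard limit: requests beyond the quota in the current 1-second window are rejected."""
    def __init__(self, limit=LIMIT_PER_SECOND):
        self.limit = limit
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self) -> bool:
        now = time.monotonic()
        if now - self.window_start >= 1.0:    # start a new 1-second window
            self.window_start, self.count = now, 0
        if self.count < self.limit:
            self.count += 1
            return True                       # forwarded to the backend
        return False                          # rejected (e.g. HTTP 429)

class SpikeControl:
    """Traffic shaping: bursts are smoothed by delaying requests to an even pace."""
    def __init__(self, limit=LIMIT_PER_SECOND):
        self.min_gap = 1.0 / limit            # minimum spacing between requests
        self.next_slot = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        if now < self.next_slot:
            time.sleep(self.next_slot - now)  # delay instead of rejecting outright
        self.next_slot = max(now, self.next_slot) + self.min_gap
        return True                           # paced to at most 500 per second
        # (the real policy also rejects requests once its queuing limit is exceeded)
```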
Which of the following best fits the definition of API-led connectivity?
A.
API-led connectivity is not just an architecture or technology but also a way to organize people and processes for efficient IT delivery in the organization
B.
API-led connectivity is a 3-layered architecture covering Experience, Process and System layers
C.
API-led connectivity is a technology which enables us to implement Experience, Process and System layer-based APIs
API-led connectivity is not just an architecture or technology but also a way to organize people and processes for efficient IT delivery in the organization
Explanation:
Correct Answer: API-led connectivity is not just an architecture or technology but also a
way to organize people and processes for efficient IT delivery in the organization.
*****************************************
Reference: https://blogs.mulesoft.com/dev/api-dev/what-is-api-led-connectivity/
What API policy would LEAST likely be applied to a Process API?
A.
Custom circuit breaker
B.
Client ID enforcement
C.
Rate limiting
D.
JSON threat protection
JSON threat protection
Explanation:
Correct Answer: JSON threat protection
*****************************************
Fact: Technically, there are no restrictions on which policy can be applied at which layer; any policy can be applied to an API in any layer. However, context should be considered carefully before applying policies to APIs.
That is why this question asks for the policy that would LEAST likely be applied to a Process API.
From the given options:
>> All policies except "JSON threat protection" can be applied without hesitation to APIs in the Process tier.
>> The JSON threat protection policy ideally fits Experience APIs, where it blocks suspicious JSON payloads coming from external API clients. It covers a security aspect by stopping possibly malicious and harmful JSON payloads from external clients calling Experience APIs.
Because external API clients are NEVER allowed to call Process APIs directly, and such malicious payloads are already stopped at the Experience API layer by this policy, it is LEAST likely that the same policy would be applied again on a Process-layer API.
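For illustration only, the sketch below shows the kind of structural limits such a policy typically enforces; the limit names and values are assumptions, not the policy's actual configuration fields.

```python
import json

# Illustrative limits only -- the real policy's configuration fields may differ.
MAX_DEPTH = 20
MAX_STRING_LENGTH = 8192
MAX_OBJECT_ENTRIES = 500
MAX_ARRAY_ELEMENTS = 500

def check_json_threats(payload: str) -> None:
    """Reject structurally abusive JSON before it reaches the API implementation."""
    data = json.loads(payload)

    def walk(node, depth=1):
        if depth > MAX_DEPTH:
            raise ValueError("maximum nesting depth exceeded")
        if isinstance(node, str) and len(node) > MAX_STRING_LENGTH:
            raise ValueError("maximum string length exceeded")
        if isinstance(node, dict):
            if len(node) > MAX_OBJECT_ENTRIES:
                raise ValueError("maximum object entry count exceeded")
            for key, value in node.items():
                walk(key, depth)          # keys are strings; check their length too
                walk(value, depth + 1)
        elif isinstance(node, list):
            if len(node) > MAX_ARRAY_ELEMENTS:
                raise ValueError("maximum array element count exceeded")
            for item in node:
                walk(item, depth + 1)

    walk(data)
```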
When using CloudHub with the Shared Load Balancer, what is managed EXCLUSIVELY
by the API implementation (the Mule application) and NOT by Anypoint Platform?
A.
The assignment of each HTTP request to a particular CloudHub worker
B.
The logging configuration that enables log entries to be visible in Runtime Manager
C.
The SSL certificates used by the API implementation to expose HTTPS endpoints
D.
The number of DNS entries allocated to the API implementation
The SSL certificates used by the API implementation to expose HTTPS endpoints
Explanation:
Correct Answer: The SSL certificates used by the API implementation to expose HTTPS
endpoints
*****************************************
>> The assignment of each HTTP request to a particular CloudHub worker is handled by Anypoint Platform itself. We do not need to manage it explicitly in the API implementation and, in fact, we CANNOT manage it there.
>> The logging configuration that enables log entries to be visible in Runtime Manager is ALWAYS managed in the API implementation, not just when using the Shared Load Balancer, so it is not something managed EXCLUSIVELY in this scenario.
>> We DO NOT manage the number of DNS entries allocated to the API implementation inside the code; Anypoint Platform takes care of this.
It is the SSL certificates used by the API implementation to expose HTTPS endpoints that must be managed EXCLUSIVELY by the API implementation; Anypoint Platform does NOT do this when the Shared Load Balancer is used.
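As a generic illustration of that responsibility (in Python rather than Mule configuration, and with placeholder file names), the application process itself loads its certificate and private key to terminate TLS on the HTTPS endpoint it exposes:

```python
import http.server
import ssl

# Placeholder file names -- in a Mule application the equivalent material lives in the
# keystore referenced by the HTTPS listener's TLS context, packaged with the app.
CERT_FILE = "api-implementation-cert.pem"
KEY_FILE = "api-implementation-key.pem"

def serve_https(port: int = 8082) -> None:
    """The application process terminates TLS with its own certificate."""
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile=CERT_FILE, keyfile=KEY_FILE)

    server = http.server.HTTPServer(("0.0.0.0", port), http.server.SimpleHTTPRequestHandler)
    server.socket = context.wrap_socket(server.socket, server_side=True)
    server.serve_forever()
```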
An organization has created an API-led architecture that uses various API layers to integrate mobile clients with a backend system. The backend system consists of a number of specialized components and can be accessed via a REST API. The process and
experience APIs share the same bounded-context model that is different from the backend
data model. What additional canonical models, bounded-context models, or anti-corruption
layers are best added to this architecture to help process data consumed from the backend
system?
A.
Create a bounded-context model for every layer and overlap them when the boundary
contexts overlap, letting API developers know about the differences between upstream and
downstream data models
B.
Create a canonical model that combines the backend and API-led models to simplify
and unify data models, and minimize data transformations.
C.
Create a bounded-context model for the system layer to closely match the backend data
model, and add an anti-corruption layer to let the different bounded contexts cooperate
across the system and process layers
D.
Create an anti-corruption layer for every API to perform transformation for every data
model to match each other, and let data simply travel between APIs to avoid the complexity
and overhead of building canonical models
Create a bounded-context model for the system layer to closely match the backend data
model, and add an anti-corruption layer to let the different bounded contexts cooperate
across the system and process layers
Explanation:
Correct Answer: Create a bounded-context model for the system layer to closely match the
backend data model, and add an anti-corruption layer to let the different bounded contexts
cooperate across the system and process layers
*****************************************
>> Canonical models are not an option here because the organization has already invested effort in creating bounded-context models for the Experience and Process APIs.
>> Anti-corruption layers for ALL APIs are unnecessary and invalid because the Experience and Process APIs share the same bounded-context model; it is only the System-layer APIs that need to choose their approach now.
>> So, having an anti-corruption layer just between the Process and System layers works well. Also, to speed things up, the System APIs can mimic the backend system's data model.
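A minimal sketch of such an anti-corruption layer is shown below; the backend and bounded-context field names are invented for illustration, not taken from the scenario.

```python
from dataclasses import dataclass

# Hypothetical backend (system-layer) record, mirroring the backend data model.
@dataclass
class BackendSensorRecord:
    SENS_ID: str
    TEMP_C_X100: int      # temperature in hundredths of a degree Celsius
    STATUS_CD: str        # cryptic backend status code, e.g. "A" or "I"

# Hypothetical shared bounded-context model used by the Process/Experience APIs.
@dataclass
class Sensor:
    sensor_id: str
    temperature_celsius: float
    active: bool

class SensorAntiCorruptionLayer:
    """Translates the backend model into the API-led bounded context,
    so backend naming and encoding quirks do not leak upstream."""

    def to_bounded_context(self, record: BackendSensorRecord) -> Sensor:
        return Sensor(
            sensor_id=record.SENS_ID,
            temperature_celsius=record.TEMP_C_X100 / 100.0,
            active=(record.STATUS_CD == "A"),
        )
```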
A TemperatureSensors API instance is defined in API Manager in the PROD environment
of the CAR_FACTORY business group. An AcmeTemperatureSensors Mule
application implements this API instance and is deployed from Runtime Manager to the
PROD environment of the CAR_FACTORY business group. A policy that requires a valid
client ID and client secret is applied in API Manager to the API instance.
Where can an API consumer obtain a valid client ID and client secret to call the
AcmeTemperatureSensors Mule application?
A. In secrets manager, request access to the Shared Secret static username/password
B. In API Manager, from the PROD environment of the CAR_FACTORY business group
C. In access management, from the PROD environment of the CAR_FACTORY business group
D. In Anypoint Exchange, from an API client application that has been approved for the TemperatureSensors API instance
Explanation:
When an API policy requiring a valid client ID and client secret is applied to an API instance in API Manager, API consumers must obtain these credentials through a registered client application. In Anypoint Exchange, the consumer requests access to the TemperatureSensors API instance on behalf of a client application; once that access request is approved, the client ID and client secret issued to that application are what the consumer uses to call the AcmeTemperatureSensors Mule application.
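For illustration, a hypothetical client call is sketched below; the URL and credential values are placeholders, and whether the client ID enforcement policy reads credentials from headers or query parameters depends on how the policy is configured.

```python
import requests

# Placeholder values -- obtained from the client application registered in Anypoint
# Exchange after its access request to the TemperatureSensors API instance is approved.
CLIENT_ID = "<client-id-from-exchange-application>"
CLIENT_SECRET = "<client-secret-from-exchange-application>"
API_URL = "https://acme-temperature-sensors.example.com/api/sensors"  # placeholder URL

# Headers are a common choice for client ID enforcement; adjust to the policy's setup.
response = requests.get(
    API_URL,
    headers={"client_id": CLIENT_ID, "client_secret": CLIENT_SECRET},
    timeout=10,
)
response.raise_for_status()
print(response.json())
```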
An application updates an inventory, running only one process at any given time to keep the inventory consistent. This process takes 200 milliseconds (0.2 seconds) to execute; therefore, the scalability threshold of the application is five requests per second. What is the impact on the application if horizontal scaling is applied, thereby increasing the number of Mule workers?
A. The application scalability threshold is five requests per second regardless of the horizontal scaling
B. The total process execution time is now 100 milliseconds (.1 seconds)
C. The application scalability threshold is now 10 requests per second
D. Horizontal scaling cannot be applied to an already-running application
Explanation:
Given that the application is designed to run only one process at a time to maintain data consistency, horizontal scaling will not raise the processing limit.
Single-process constraint: because only one inventory update may execute at any given moment, the work remains serialized no matter how many Mule workers are added. Each execution takes 0.2 seconds, so throughput stays at 1 / 0.2 = 5 requests per second, and the scalability threshold is unchanged by horizontal scaling.
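The arithmetic can be sketched as follows, assuming (as the question states) that the single-process constraint is global rather than per worker:

```python
# Worked numbers from the question: one update at a time, 0.2 s per update.
PROCESS_TIME_SECONDS = 0.2

def max_throughput(workers: int) -> float:
    """With a single global process allowed at any moment, added workers do not
    run updates in parallel, so throughput stays at 1 / processing time.
    The 'workers' argument is deliberately unused to make that point."""
    concurrent_processes = 1   # the consistency constraint, not the worker count
    return concurrent_processes / PROCESS_TIME_SECONDS

print(max_throughput(workers=1))   # 5.0 requests per second
print(max_throughput(workers=4))   # still 5.0 requests per second
```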
An API has been updated in Anypoint Exchange by its API producer from version 3.1.1 to 3.2.0, following accepted semantic versioning practices, and the changes have been communicated via the API's public portal. The API endpoint does NOT change in the new version. How should the developer of an API client respond to this change?
A.
The API producer should be requested to run the old version in parallel with the new one
B.
The API producer should be contacted to understand the change to existing functionality
C.
The API client code only needs to be changed if it needs to take advantage of the new features
D.
The API clients need to update the code on their side and need to do full regression
The API client code only needs to be changed if it needs to take advantage of the new features