
2013 Latest Cisco 350-001 Exam Section 3: Congestion Avoidance (9 Questions)

You wish to enable the Resource Reservation protocol on one of the interfaces of a router. Which of the following commands will accomplish this?
A. ip rsvp sender
B. ip rsvp enable
C. ip rsvp bandwidth
D. rsvp enable
E. ip rsvp reservation
F. RSVP is enabled in global configuration mode, not in interface configuration mode.
Answer: C
The ip rsvp bandwidth command is what enables RSVP on an interface.
Incorrect Answers:
F. RSVP is configured on a per interface basis, not in global configuration mode.

Which of the following statements is valid regarding Custom Queuing?
A. Custom queuing always services the highest priority traffic first before servicing the lower priority traffic.
B. Custom queuing looks at groups of packets from similar source-destination pairs.
C. Custom queuing processes the queue based on the number of packets sent.
D. Custom queuing will not proceed to a next queue unless the current queue is empty.
E. Custom queuing can prevent one type of traffic from saturating the entire link.
Answer: E
CQ allows fairness not provided with priority queuing (PQ). With CQ, you can control the available bandwidth on an interface when it is unable to accommodate the aggregate traffic enqueued. Associated with each output queue is a configurable byte count, which specifies how many bytes of data should be delivered from the current queue by the system before the system moves on to the next queue. When a particular queue is being processed, packets are sent until the number of bytes sent exceeds the queue byte count defined by the queue-list queue byte-count command, or until the queue is empty. With custom queuing, all queues will be serviced. With priority queuing, a bandwidth hog can dominate the link.
Incorrect Answers:
A, D. Custom queuing uses a round robin mechanism, ensuring that one type of traffic (even the highest priority) does not completely starve the lower priority queues. Once the byte count for a queue is fulfilled, the next queue is serviced.
B, C. Custom queues are serviced based on the number of bytes sent for each queue, not on the number of packets sent. This prevents traffic with bigger packets (such as FTP) from dominating the link over traffic with smaller packets (such as an RTP session).
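As a rough illustration of the byte-count round robin described above, here is a minimal Python sketch (the queue contents and byte counts are invented example values, not taken from IOS):

```python
from collections import deque

def custom_queue_round(queues, byte_counts):
    """One CQ round-robin pass: service each queue in turn, sending
    packets until the configured byte count is reached or exceeded,
    or the queue is empty, then move on to the next queue."""
    sent = []
    for q, count in zip(queues, byte_counts):
        bytes_sent = 0
        while q and bytes_sent < count:
            pkt = q.popleft()      # packet size in bytes
            bytes_sent += pkt
            sent.append(pkt)
    return sent

# Large FTP packets vs. small RTP packets (illustrative sizes)
ftp = deque([1500, 1500, 1500, 1500])
voice = deque([200, 200, 200])
out = custom_queue_round([ftp, voice], byte_counts=[3000, 1000])
# FTP stops after its 3000-byte allowance; voice is still serviced
```

Because servicing moves on once a queue's byte count is used up rather than when the queue empties, the FTP queue cannot monopolize the link, which is why answer E is correct and A/D are not.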

Due to intermittent congestion issues on a link, Committed Access Rate (CAR) has been
configured on an interface. During a period of congestion, a packet arrives that causes the
compounded debt to be greater than the value set for the extended burst. Which of the
following will occur due to this? (Choose all that apply).

A. CAR’s exceed action takes effect, dropping the packet.
B. A token is removed from the bucket.
C. The packet will be queued and eventually serviced.
D. The compounded debt value is effectively set to zero (0).
E. The packet is buffered by the CAR process.
Answer: A, D
Here is how the extended burst capability works. If a packet arrives and needs to borrow n tokens because the token bucket contains fewer tokens than its size requires, CAR compares the following two values:
1. The extended burst parameter value.
2. The compounded debt, computed as the sum of a_i over all packets i, where:
- i indicates the ith packet that has attempted to borrow tokens since the last time a packet was dropped.
- a_i indicates the actual debt value of the flow after packet i is sent. Actual debt is simply a count of how many tokens the flow has currently borrowed.

If the compounded debt is greater than the extended burst value, CAR’s exceed action takes effect. After a packet is dropped, the compounded debt is effectively set to 0. CAR will compute a new compounded debt value equal to the actual debt for the next packet that needs to borrow tokens. If the actual debt is greater than the extended limit, all packets will be dropped until the actual debt is reduced through accumulation of tokens in the token bucket.
Incorrect Answers:
B. Dropped packets do not count against any rate or burst limit. That is, when a packet is dropped, no tokens are removed from the token bucket.
C, E. After the exceed action takes place, the packet is dropped immediately; it is not buffered or queued.
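The compounded-debt bookkeeping can be sketched in Python. This is a simplification with invented numbers; real CAR replenishes tokens continuously based on the configured rate, whereas here a fixed refill per packet is assumed:

```python
def car_decide(pkt_sizes, normal_burst, extended_burst, refill_per_pkt=0):
    """Per-packet CAR decision using the extended-burst rules above."""
    tokens = normal_burst       # token bucket; goes negative while borrowing
    compounded = 0              # sum of actual debts since the last drop
    out = []
    for size in pkt_sizes:
        tokens = min(tokens + refill_per_pkt, normal_burst)
        if tokens >= size:      # enough tokens: conform, no borrowing
            tokens -= size
            out.append('send')
            continue
        actual_debt = size - tokens            # debt if this packet is sent
        if compounded + actual_debt > extended_burst:
            out.append('drop')                 # exceed action takes effect
            compounded = 0                     # compounded debt reset to 0
            # no tokens are removed for the dropped packet
        else:
            compounded += actual_debt
            tokens -= size                     # borrow; bucket goes negative
            out.append('send')
    return out

# With 600-byte packets, a 1000-token bucket, and a 2000-token extended
# burst, the fourth packet's compounded debt would exceed the extended
# burst, so only that packet is dropped and the debt resets.
decisions = car_decide([600] * 5, normal_burst=1000, extended_burst=2000)
```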


In an effort to minimize the risks associated with DoS and ICMP flooding attacks, the following is configured on the serial interface of a router:
interface serial 0
rate-limit input access-group 199 128000 4000 4000 conform-action
transmit exceed-action drop

access-list 199 permit icmp any any

What QoS feature is this an example of?
Answer: D
Committed Access Rate (CAR) is used to rate limit traffic. In this example, all ICMP
traffic that exceeds the defined level will be dropped. This will prevent an ICMP flood
attack from saturating the link.
CAR definition:
Rate limiting is one mechanism that allows a network to run in a degraded manner, but remain up, when it is receiving a stream of Denial of Service (DoS) attack packets as well as actual network traffic. Rate limiting can be achieved in a number of ways using Cisco IOS software: through Committed Access Rate (CAR), Traffic Shaping, and both shaping and policing through the Modular Quality of Service Command-Line Interface (Modular QoS CLI).

Incorrect Answers:
A. Class-based weighted fair queuing (CBWFQ) extends the standard WFQ functionality to provide support for user-defined traffic classes. For CBWFQ, you define traffic classes based on match criteria including protocols, access control lists (ACLs), and input interfaces. Packets satisfying the match criteria for a class constitute the traffic for that class. A queue is reserved for each class, and traffic belonging to a class is directed to the queue for that class. Reference: http://www.cisco.com/en/US/products/sw/iosswrel/ps1830/products_feature_guide09186a0080087a84.htm
B, C. RSVP and LLQ (low latency queuing) are often implemented in voice and video data networks, but are not typically used for preventing DoS attacks.
F. FRTS is frame relay traffic shaping. It is not clear from this example that the link is
even using frame relay as the transport link.


Which of the following are functions of Random Early Discard (RED)? (Choose all that apply)
A. To avoid global synchronization for TCP traffic.
B. To provide unbiased support for bursty traffic.
C. To minimize packet delay jitter.
D. To ensure that high priority traffic gets sent first.
E. To prevent the starvation of the lower priority queues.
Answer: A, B, C
Explanation: When it comes to Quality of Service, there are two separate approaches. The first is congestion management, which is setting up queues to ensure that the higher priority traffic gets serviced in times of congestion. The other is congestion avoidance, which works by dropping packets before congestion on the link occurs.
Random Early Detection (RED) is a congestion avoidance mechanism that takes advantage of TCP's congestion control mechanism. RED takes a proactive approach to congestion. Instead of waiting until the queue is completely filled up, RED starts dropping packets with a non-zero drop probability after the average queue size exceeds a certain minimum threshold. A drop probability ensures that RED randomly drops packets from only a few flows, avoiding global synchronization. A packet drop is meant to signal the TCP source to slow down. Responsive TCP flows slow down after packet loss by going into slow start mode.
Incorrect Answers:
D. This would be a function of priority queuing, not RED. Weighted RED (WRED) is
used to assign priorities to traffic and works to not drop the higher priority traffic types,
but RED does not.

E. This is a function of custom queuing, which is a congestion management mechanism,
not a congestion avoidance mechanism such as RED.
Reference: 'IP Quality of Service', page 130, Cisco Press.
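The minimum-threshold behavior described in the explanation can be sketched as a drop-probability function. The threshold and max_p values below are invented for illustration, not IOS defaults:

```python
def red_drop_probability(avg_queue, min_th=20, max_th=40, max_p=0.1):
    """RED drop probability as a function of the average queue depth."""
    if avg_queue < min_th:
        return 0.0          # below the minimum threshold: never drop
    if avg_queue >= max_th:
        return 1.0          # at or above the maximum threshold: tail drop
    # Linear ramp from 0 to max_p between the two thresholds
    return max_p * (avg_queue - min_th) / (max_th - min_th)

# The average itself is an exponentially weighted moving average, so a
# short burst barely moves it -- this is RED's unbiased support for
# bursty traffic (answer B):
def update_avg(avg, instantaneous, weight=0.002):
    return (1 - weight) * avg + weight * instantaneous
```

Because drops are probabilistic and spread across flows, all TCP senders do not back off at once, which avoids the global synchronization in answer A.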

You issue the following configuration change on router TK1:

ip rsvp sender UDP 3030 serial0 20

What is the effect of this change?
A. The router will simulate receiving RSVP PATH messages destined to multicast
address from source The previous hop of the PATH message is, and the message was received on interface serial 0.
B. The router will simulate generating RSVP RESV messages destined to multicast
address from source The next hop of the PATH message is, and the message was received on interface serial 0.
C. The router will act as if it was sending RSVP PATH messages destined to multicast
address from source The next hop of the PATH message is, and the message was received on interface serial 0.
D. The router will act as if it was receiving RSVP RESV messages destined to multicast
address from source The previous hop of the PATH message is, and the message was received on interface serial 0.
Answer: A
Explanation: This command causes the router to act as if it were receiving PATH messages destined to multicast address from a source The previous hop of the PATH message is, and the message was received on interface Serial 0.
To enable a router to simulate receiving and forwarding Resource Reservation Protocol (RSVP) PATH messages, use the ip rsvp sender global configuration command. To disable this feature, use the no form of this command. The syntax is:

ip rsvp sender session-ip-address sender-ip-address {tcp | udp | ip-protocol} session-dport sender-sport previous-hop-ip-address previous-hop-interface bandwidth burst-size

Incorrect Answers:
B. This answer describes the “ip rsvp reservation-host” command.
C. This answer describes the "ip rsvp sender-host" command.
D. The “ip rsvp sender” command simulates a host that is receiving PATH messages, not RESV messages.

Rate Limiting is configured on the Ethernet interface of a router as follows:
interface Ethernet 0

rate-limit input access-group rate-limit 1 1000000 10000 10000

access-list rate-limit 1 mask 07

What effect will this configuration have?
A. Committed access rate policing limits all TCP traffic to 10Mbps.
B. Traffic matching access-list 7 is rate limited.
C. Voice traffic with DiffServ code point 43 is guaranteed.
D. Traffic with IP Precedence values of 0, 1, and 2 will be policed.
Answer: D
Use the mask keyword to assign multiple IP precedences to the same rate-limit list. To determine the mask value, perform the following steps:
Step 1. Decide which precedences you want to assign to this rate-limit access list.
Step 2. Convert each precedence into an 8-bit number, with each bit corresponding to one precedence. For example, an IP precedence of 0 corresponds to 00000001, 1 corresponds to 00000010, 6 corresponds to 01000000, and 7 corresponds to 10000000.
Step 3. Add the 8-bit numbers for the selected precedences together. For example, the mask for precedences 1 and 6 is 01000010.
Step 4. Convert the binary mask into the corresponding hexadecimal number. For example, 01000010 becomes 0x42. This value is used in the access-list rate-limit command. Any packets that have an IP precedence of 1 or 6 will match this access list.
A mask of FF matches any precedence, and 00 does not match any precedence.
In this example, a mask of 07 translates to 00000111, so IP precedence 0, 1, and 2 will be policed.
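The mask computation above amounts to setting one bit per precedence; a small Python sketch:

```python
def precedence_mask(precedences):
    """Hex mask for an access-list rate-limit mask entry:
    bit n of the mask corresponds to IP precedence n."""
    mask = 0
    for p in precedences:
        mask |= 1 << p      # precedence 0 -> 00000001, 6 -> 01000000, ...
    return format(mask, '02X')

precedence_mask([1, 6])      # '42', the 0x42 example in the text
precedence_mask([0, 1, 2])   # '07', the mask in this question
```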

When configuring Low Latency Queuing (LLQ), a bandwidth parameter is needed. What does this parameter specify?
A. It provides a built in policer to limit the priority traffic in the LLQ during congestion.
B. This parameter is optional, since the LLQ will always have precedence over other queues.
C. This parameter should be as low as possible. It represents bandwidth which will always be reserved. It reduces the amount of bandwidth on the interface, even if it is not used by any LLQ traffic.
D. It represents the reference CIR to calculate the burst size of the token bucket of the built-in policer.
E. None of the above.
Answer: A
The bandwidth argument is used to specify the maximum amount of bandwidth allocated for packets belonging to a class configured with the priority command. The bandwidth parameter both guarantees bandwidth to the priority class and restrains the flow of packets from the priority class. When the device is not congested, the priority class traffic is allowed to exceed its allocated bandwidth. When the device is congested, the priority class traffic above the allocated bandwidth is discarded.
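A toy model of the built-in policer's behavior described above (the bandwidth figures in the example call are invented):

```python
def llq_priority_send_rate(offered_bps, allocated_bps, congested):
    """Bits/sec of priority-class traffic actually transmitted.

    Uncongested: priority traffic may exceed its allocation.
    Congested: traffic above the allocation is policed (dropped)."""
    if not congested:
        return offered_bps
    return min(offered_bps, allocated_bps)

# 2 Mb/s of voice offered against a 1 Mb/s priority allocation:
llq_priority_send_rate(2_000_000, 1_000_000, congested=False)  # all sent
llq_priority_send_rate(2_000_000, 1_000_000, congested=True)   # policed to 1 Mb/s
```

This conditional policing is what keeps a misbehaving priority class from starving the other CBWFQ classes during congestion, which is why answer A is correct.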

What statement is FALSE with regards to Weighted RED (WRED)?

A. WRED is a congestion avoidance mechanism, based on the adaptive nature of TCP traffic for congestion.
B. WRED is a queuing feature.
C. WRED allows for differentiated dropping behavior based on either IP precedence or DSCP.
D. WRED is configurable in a CBWFQ policy-map.
E. All of the above are false statements.
Answer: B
Explanation:
The WRED algorithm provides congestion avoidance on network interfaces by providing buffer management, and by allowing Transmission Control Protocol (TCP) traffic to throttle back before buffers are exhausted. This helps avoid tail drops and global synchronization issues, maximizing network usage and TCP-based application performance. WRED works by selectively dropping packets before congestion occurs, so it is considered a congestion avoidance feature, not a queuing feature.
Incorrect Answers:
A. WRED is only useful when the bulk of the traffic is TCP/IP traffic. With TCP, dropped packets indicate congestion, so the packet source will reduce its transmission rate. With other protocols, packet sources may not respond or may resend dropped packets at the same rate. Thus, dropping packets does not decrease congestion.
C. WRED works with the IP precedence or DSCP values to determine which packets get dropped first. You can configure WRED to ignore IP Precedence when making drop decisions so that nonweighted RED behavior is achieved.
D. WRED can indeed be configured in a policy map that is applied to class based weighted fair queuing as specified in the following: http://www.cisco.com/en/US/products/sw/iosswrel/ps1829/products_feature_guide09186a00801b2406.htm
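The per-precedence weighting in answer C can be sketched by giving higher-precedence traffic a higher minimum threshold, so it is dropped later (all threshold values below are invented for illustration):

```python
# (min_threshold, max_threshold) per IP precedence -- illustrative only
WRED_PROFILES = {0: (20, 40), 5: (32, 40)}

def wred_drop_probability(avg_queue, precedence, max_p=0.1):
    """Like RED, but the thresholds depend on the packet's precedence,
    so lower-precedence traffic starts being dropped earlier."""
    min_th, max_th = WRED_PROFILES.get(precedence, (20, 40))
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

# At the same average queue depth, precedence 0 already sees drops
# while precedence 5 does not:
wred_drop_probability(30, precedence=0)   # 0.05
wred_drop_probability(30, precedence=5)   # 0.0
```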
Topic 6: WAN (23 Questions)