Be ye not afraid... I have reviewed this document as part of the security
directorate's ongoing effort to review all IETF documents being processed by
the IESG.  These comments were written primarily for the benefit of the
security area directors.  Document editors and WG chairs should treat these
comments just like any other last call comments.

Version reviewed: draft-ietf-aqm-pie-06 - PIE: A Lightweight Control Scheme
To Address the Bufferbloat Problem

Summary: LGTM, Security AD attention not required.

Details: No major issues, some nits. Feel free to integrate or ignore.

Notes: >5 authors. Many of the authors are experienced, so they probably
already know the RFC Ed might make sad face at you.

Comments in [O]riginal, [P]roposed, [R]eason.

Abstract

   Bufferbloat is a phenomenon where excess buffers in the network cause
[O] phenomenon where excess
[P] phenomenon in which excess
[R] grammar

   high latency and jitter. As more and more interactive applications

[SNIP]

   There is a pressing need to design intelligent queue management schemes
   that can control latency and jitter; and hence provide desirable quality
   of service to users.
[O] jitter; and hence
[P] jitter, and hence
[R] grammar

[SNIP]

   The design does not require per-packet timestamp, so it incurs very
   small overhead and is
[O] require per-packet timestamp,
[P] require per-packet timestamps,
[R] grammar
[O] incurs very small overhead
[P] incurs very little overhead
[R] word choice

1. Introduction

   The explosion of smart phones, tablets and video traffic in the Internet
   brings about a unique set of challenges for congestion control. To avoid
   packet drops, many service providers or data center operators require
   vendors to put in as much buffer as possible. With rapid decrease in
   memory chip prices, these requests are easily
[O] With rapid decrease in memory chip price,
[P] Because of the rapid decrease in memory chip prices,

   accommodated to keep customers happy. While this solution succeeds in
   assuring low packet loss and high TCP throughput, it suffers from a major
   downside. The TCP protocol continuously increases its sending rate and
   causes network buffers to fill up. TCP cuts its rate only when it
   receives a packet drop or mark that is interpreted as a congestion
   signal. However, drops and marks usually occur when network buffers are
   full or almost full. As a result, excess buffers, initially designed to
   avoid packet drops, would lead to highly elevated queueing latency and
   jitter. It is a delicate balancing act to design a queue management
   scheme that not only allows short-term burst to smoothly pass, but also
   controls the average latency in the presence of long-running greedy
   flows.
[O] It is a delicate balancing act to design a queue management scheme that
    not only allows short-term burst to smoothly pass, but also controls
    the average latency in the presence of long-running greedy flows.
[P] Designing a queue management scheme that allows short-term bursts to
    pass smoothly as well as controlling the average latency in the
    presence of long-running greedy flows is a delicate balancing act.
[R] readability

[SNIP]

   New algorithms are beginning to emerge to control queueing latency
   directly to address the bufferbloat problem [CoDel]. Along these lines,
   PIE also aims to keep the benefits of RED: such as easy
[O] the benefits of RED: such as easy
[P] the benefits of RED, including easy
[R] readability and grammar

   implementation and scalability to high speeds. Similar to RED, PIE
   randomly drops an incoming packet at the onset of the congestion. The
   congestion detection, however, is based on the queueing latency instead
   of the queue length like RED.
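Aside (no change requested): the paragraph above is the heart of the scheme,
so a rough picture may help readers of the nits below. The sketch that
follows is my own simplification for orientation only -- the names, the
constants, and the stripped-down probability update are assumptions and are
not taken from the draft's pseudocode. The latency estimate fed into it
would come from something like queue length divided by an estimated
departure rate, as the draft's Section 5.2 discusses.

    /* Illustrative PIE-style sketch; constants and the simplified update
     * rule are assumptions made for this aside, not the draft's design. */
    #include <stdbool.h>
    #include <stdlib.h>

    #define QDELAY_REF 0.015  /* target queueing latency, seconds (assumed) */
    #define ALPHA      0.125  /* gain on deviation from target (assumed)    */
    #define BETA       1.25   /* gain on the latency trend (assumed)        */

    static double drop_prob  = 0.0;  /* probability applied at enqueue      */
    static double qdelay_old = 0.0;  /* latency estimate at previous update */

    /* Periodic update: move the drop probability up or down based on how
     * far the current latency estimate sits from the target and on whether
     * latency is rising or falling. */
    void pie_update(double qdelay)
    {
        drop_prob += ALPHA * (qdelay - QDELAY_REF)
                   + BETA  * (qdelay - qdelay_old);
        if (drop_prob < 0.0) drop_prob = 0.0;
        if (drop_prob > 1.0) drop_prob = 1.0;
        qdelay_old = qdelay;
    }

    /* Enqueue-time decision: drop the arriving packet with the current
     * probability rather than waiting for the buffer to fill. */
    bool pie_drop_on_enqueue(void)
    {
        return ((double)rand() / (double)RAND_MAX) < drop_prob;
    }

The draft layers more on top of this skeleton (burst protection,
departure-rate-based latency estimation, the DOCSIS-PIE variations noted
below), so treat the sketch as orientation only.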
[SNIP]

   In October 2013, CableLabs' DOCSIS 3.1 specification [DOCSIS_3.1]
   mandated that cable modems implement a specific variant of the PIE design
   as the active queue management algorithm. In addition to cable specific
   improvements, the PIE design in DOCSIS 3.1 [DOCSIS-PIE] has improved the
   original design in several areas, including de-randomization of coin
   tosses and enhanced burst protection. This draft separates the PIE design
   into the basic elements that are MUST to be implemented and optional
   SHOULD/MAY enhancement elements.
[O] that are MUST to be implemented
[R] This is awkward, but maybe should be left as is? Or delete "are"?

4. The Basic PIE Scheme

   As illustrated in Fig. 1, PIE conceptually comprises three simple MUST
[O] PIE conceptually comprises three simple
[P] PIE is comprised of three simple
[R] I think this is what is meant -- PIE is made of three MUST components,
    not PIE makes those three components?

   components: a) random dropping at enqueueing; b) periodic drop
   probability update; c) latency calculation.

[SNIP]

5.2 Departure Rate Estimation

   One way to calculate latency is to obtain the departure rate. The
   draining rate of a queue in the network often varies either because
   other queues are sharing the same link, or the link capacity fluctuates.
[O] because other queues are sharing the same link, or the link capacity
    fluctuates.
[P] because other queues are sharing the link or because the link capacity
    fluctuates.
[R] clarity

9. Security Considerations

   This document describes an active queue management algorithm based on
   implementations in Cisco products. This algorithm introduces no specific
   security exposures.

10. IANA Considerations

   There are no actions for IANA.

-- END --