The Internet is all about sharing capacity between multiple users - that is the whole point of packet multiplexing. But the IETF (and the wider industry) is realising we really don't understand how best to share out capacity. Many ISPs now override the way TCP shares out capacity. The blocks and throttles resulting from this arms race are causing bizarre feature interactions and random black holes. In Nov 2008, the Transport Area of the IETF asked the IRTF Internet Congestion Control Research Group (ICCRG) to address this challenge. This BoF proposal brings together those who want a working group to experiment on one of the more promising approaches: congestion exposure. The aim is not to pre-empt the ICCRG, but to investigate practical issues through experimental implementation and deployment.

Congestion Exposure?

The premise: capacity sharing is hard because the information needed to share capacity properly isn't visible at the internetwork layer. Specifically, the idea is for the sender to mark the outermost IP header of each packet to reveal the congestion it expects over the rest of the path. A protocol called re-ECN (re-inserted explicit congestion notification) has been proposed to do this. Re-ECN is the strongest candidate for adoption, but the proposed working group might find it needs redesign, or it may adopt an alternative if one surfaces.

Whatever the precise protocol, the aim is for an ISP to be able to count the volume of congestion about to be caused by an aggregate of traffic as easily as it can count the volume of bytes - it simply counts the volume of marked packets instead (see the sketch at the end of this section). An ISP could do this for each attached user, or for whole attached networks. There is no intent to change the well-established approach where congestion is detected and responded to by transports on endpoints (e.g. congestion control in TCP or RTP/RTCP). But network operators should be able to see the congestion too.

Once ISPs can see congestion, they can discourage users from causing large volumes of congestion, and they can discourage other networks from allowing their users to cause congestion. In a nutshell, this is about "accountability for causing congestion" - in both directions - holding users accountable for causing too much congestion and holding networks accountable for providing too little capacity.

Because congestion isn't currently visible to ISPs, they have to resort to their own piecemeal ways to limit it, e.g. volume capping, fair queuing and deep packet inspection (DPI). The proposed working group will explain clearly why these techniques are poor imitations of what is really needed - congestion limiting. These piecemeal approaches unnecessarily limit what users do want (volume) while only weakly limiting what they don't want (congestion). Congestion is the precise factor that causes grief to users, so we should reveal it; then it can be dealt with.

We shouldn't complain that ISPs are violating the Internet architecture with deep packet inspection etc. if we (the IETF) don't provide a better alternative. The history of firewalls and NATs shows that we need to provide timely protocol support for good 'semi-permeable membranes' between users; otherwise the industry has no choice but to build bad impermeable walls. If we don't multiplex capacity properly, we shouldn't be surprised if the Internet becomes increasingly carved up into circuits.

For the avoidance of doubt, when we talk of congestion, it doesn't imply any impairment.
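To make the metering idea above concrete, here is a minimal sketch (in Python) of a per-user congestion-volume meter. It assumes each packet carries a boolean flag indicating a congestion-exposure mark; the names (Packet, CongestionVolumeMeter, congestion_marked) are purely illustrative and are not part of re-ECN or any standard API.

    # Hypothetical sketch: per-user congestion-volume metering at an ISP's
    # ingress. Assumes packets expose a boolean `congestion_marked` flag set
    # by a congestion-exposure protocol such as re-ECN. Names are illustrative.

    from collections import defaultdict
    from dataclasses import dataclass


    @dataclass
    class Packet:
        user_id: str             # attached user (or neighbouring network)
        size_bytes: int          # total packet size
        congestion_marked: bool  # True if the header exposes whole-path congestion


    class CongestionVolumeMeter:
        """Counts congestion volume the same way byte volume is counted,
        but only over packets carrying a congestion-exposure mark."""

        def __init__(self):
            self.byte_volume = defaultdict(int)        # plain traffic volume
            self.congestion_volume = defaultdict(int)  # volume of marked packets

        def observe(self, pkt: Packet) -> None:
            self.byte_volume[pkt.user_id] += pkt.size_bytes
            if pkt.congestion_marked:
                self.congestion_volume[pkt.user_id] += pkt.size_bytes


    # Example: user A sends more bytes, but user B causes more congestion.
    meter = CongestionVolumeMeter()
    meter.observe(Packet("A", 1500, False))
    meter.observe(Packet("A", 1500, False))
    meter.observe(Packet("B", 1500, True))
    print(meter.byte_volume["A"], meter.congestion_volume["A"])  # 3000 0
    print(meter.byte_volume["B"], meter.congestion_volume["B"])  # 1500 1500

The point of the sketch is that the meter needs no per-flow state and no packet inspection beyond the outermost IP header: congestion accounting is as cheap as byte accounting.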
The proposed approach builds on Explicit Congestion Notification (ECN) [RFC3168], where a mark is purely a warning of approaching congestion rather than a sign of impairment. Congestion exposure can add incentives to keep congestion low, which keeps queues short with minimal actual congestion delay or loss.
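As an illustration of how visible congestion could be turned into such an incentive, here is a minimal, hypothetical sketch of a per-user congestion policer: a token bucket filled at an agreed congestion allowance and drained by congestion-marked bytes. The class name, parameters and allowance figures are assumptions for illustration only, not anything the proposed working group has specified.

    # Hypothetical sketch of a per-user congestion policer. Assumes something
    # like the meter above reports congestion-marked bytes. The bucket fills at
    # an agreed congestion allowance (bytes of congestion per second); once a
    # user exhausts it, their traffic can be dropped, delayed or deprioritised.

    import time


    class CongestionPolicer:
        def __init__(self, allowance_bytes_per_s: float, bucket_depth_bytes: float):
            self.rate = allowance_bytes_per_s   # long-term congestion allowance
            self.depth = bucket_depth_bytes     # tolerated congestion burst
            self.tokens = bucket_depth_bytes
            self.last = time.monotonic()

        def allow(self, congestion_marked_bytes: int) -> bool:
            """Return False once the user has exceeded their congestion allowance."""
            now = time.monotonic()
            self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if congestion_marked_bytes > self.tokens:
                return False      # over allowance: drop, delay or deprioritise
            self.tokens -= congestion_marked_bytes
            return True


    # Example: allow roughly 100 kB of congestion per week with a 10 kB burst.
    policer = CongestionPolicer(allowance_bytes_per_s=100_000 / (7 * 24 * 3600),
                                bucket_depth_bytes=10_000)
    print(policer.allow(1500))    # True while within the allowance

Note that the policer limits congestion caused, not bytes sent: a heavy but well-behaved user who backs off promptly consumes little of the allowance, which is exactly the contrast with volume caps drawn above.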