Subject: Re: [BUG?] protocol.version=2 sends HTTP "Expect" headers
From: brian m. carlson
To: Jeff King
Cc: git@vger.kernel.org, Jon Simons, Jonathan Tan
Date: 2018-11-01 00:48:04 UTC

On Wed, Oct 31, 2018 at 12:03:53PM -0400, Jeff King wrote:
> Since 959dfcf42f (smart-http: Really never use Expect: 100-continue,
> 2011-03-14), we try to avoid sending "Expect" headers, since some
> proxies apparently don't handle them well. There we have to explicitly
> tell curl not to use them.
> 
> The exception is large requests with GSSAPI, as explained in c80d96ca0c
> (remote-curl: fix large pushes with GSSAPI, 2013-10-31).
> 
> However, Jon Simons noticed that when using protocol.version=2, we've
> started sending Expect headers again. That's because rather than going
> through post_rpc(), we push the stateless data through a proxy_state
> struct. And in proxy_state_init(), when we set up the headers, we do not
> disable curl's Expect handling.
> 
> So a few questions:
> 
>   - is this a bug or not? I.e., do we still need to care about proxies
>     that can't handle Expect? The original commit was from 2011. Maybe
>     things are better now. Or maybe that's blind optimism.
> 
>     Nobody has complained yet, but that's probably just because v2 isn't
>     widely deployed yet.

HTTP/1.1 requires support for 100 Continue on the server side and in
proxies (it is mandatory to implement).  The original commit that
disabled it, 959dfcf42f ("smart-http: Really never use Expect:
100-continue", 2011-03-14), gives proxies as the reason for doing so.
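
For context, the mechanism in question is curl's rule that a header
given with an empty value in CURLOPT_HTTPHEADER suppresses the header
curl would otherwise generate on its own.  A rough sketch, from memory
(names illustrative), of what post_rpc() does today and what
proxy_state_init() currently skips:

    #include <curl/curl.h>

    /*
     * An empty "Expect:" entry tells curl not to add the header
     * itself; "Expect: 100-continue" forces it back on for the GSSAPI
     * large-push case described in c80d96ca0c.
     */
    static struct curl_slist *set_expect_header(struct curl_slist *headers,
                                                int needs_100_continue)
    {
            return curl_slist_append(headers, needs_100_continue ?
                                     "Expect: 100-continue" : "Expect:");
    }

    /* ...and later: curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers); */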

It's my understanding that all major proxies (including, as of version
3.2, Squid) support HTTP/1.1 at this point.  Moreover, Kerberos is more
likely to be used in enterprises, where proxies (especially poorly
behaved and outright broken proxies) are more common.  We haven't seen
any complaints about Kerberos support in a long time.  This leads me to
believe that things generally work[0].

Finally, modern versions of libcurl also have a timeout after which they
assume the server is not going to respond and just send the request body
anyway.  This makes the issue mostly moot.
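
If I remember right, the knob for that is CURLOPT_EXPECT_100_TIMEOUT_MS
(added in curl 7.36.0, one second by default), so even a proxy that
swallows the interim response only costs a short stall.  A minimal
sketch, in case we ever want to tune it:

    /* Wait at most 500 ms for "100 Continue" before sending the body. */
    curl_easy_setopt(curl, CURLOPT_EXPECT_100_TIMEOUT_MS, 500L);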

>   - alternatively, should we just leave it on for v2, and provide a
>     config switch to disable it if you have a crappy proxy? I don't know
>     how widespread the problem is, but I can imagine that the issue is
>     subtle enough that most users wouldn't even know.

For the reasons I mentioned above, I'd leave it on for now.  Between
libcurl and better proxy support, I think this issue is mostly solved.
If we need to fix it in the future, we can, and people can fall back to
older protocols via config in the interim.
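
(Concretely, that fallback is something like "git config --global
protocol.version 0" until the behavior is sorted out.)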

[0] In some environments, people use SSH because the proxy breaks
everything that looks like HTTP, but that's beyond the scope of this
discussion.
-- 
brian m. carlson: Houston, Texas, US
OpenPGP: https://keybase.io/bk2204