
XHTTP client: Add gRPC header to "stream-up" mode by default #4042

Merged: 2 commits merged into main, Nov 21, 2024

Conversation

RPRX (Member) commented Nov 21, 2024

https://t.me/projectXtls/501 #4038 (comment)

This PR adds the Content-Type: application/grpc header by default to stream-up mode's upload POST requests, disguising them as gRPC to get through middleboxes that would otherwise buffer upload requests and break streaming uplink. Testing shows that Cloudflare supports this over H2 but not H3, while Cloudflare Tunnel needs no such disguise on either H2 or H3.

XHTTP stream-up mode can now replace the traditional gRPC transport. Its main advantages:

  • It needs no gRPC library at all, so performance is better
  • Its downlink is a separate GET request, so it is not subject to CDN traffic limits on gRPC
  • It also has enhancements such as header padding and XMUX with separated uplink/downlink, and the extra mechanism has already landed, so every parameter is shareable; it is more mature

Of course, if you don't want to disguise it as gRPC, set "noGRPCHeader": true on the client, just like noSSEHeader on the server.
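As a sketch, the opt-out might sit in the client's stream settings like this (field and section names other than noGRPCHeader are assumptions based on typical XHTTP configs, not taken from this PR):

```json
{
  "network": "xhttp",
  "xhttpSettings": {
    "mode": "stream-up",
    "path": "/yourpath",
    "noGRPCHeader": true
  }
}
```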

In addition, this PR changes the client's "auto" behavior: stream-up when using TLS H2 or REALITY, otherwise packet-up, so sometimes you will need to select packet-up manually.
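The "auto" selection described above could be sketched as follows (a hypothetical helper with illustrative names, not the actual Xray code):

```go
package main

import "fmt"

// chooseUploadMode sketches the "auto" rule from this PR:
// stream-up for TLS H2 or REALITY, packet-up otherwise.
// Function and parameter names are illustrative only.
func chooseUploadMode(security, alpn string) string {
	if security == "reality" || (security == "tls" && alpn == "h2") {
		return "stream-up"
	}
	return "packet-up"
}

func main() {
	fmt.Println(chooseUploadMode("tls", "h2"))   // stream-up
	fmt.Println(chooseUploadMode("tls", "h3"))   // packet-up
	fmt.Println(chooseUploadMode("reality", "")) // stream-up
}
```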

A "stream-full" mode may be added in the future: a single HTTP request carrying both uplink and downlink with no separation at all, similar to the existing HTTP transport.

Whether this opens yet another brand-new era doesn't matter; supporting Project X NFT matters a lot: Announcement of NFTs by Project X #3633

RPRX (Member, Author) commented Nov 21, 2024

In my own tests of stream-up, H2 plus the gRPC header gets through CF, but H3 does not 🧐
With CFT, enabling stream mode in the dashboard works on both H2 and H3, and this disguise isn't needed either 🙈

I tried xhttp stream-up: with H2 it gets through Cloudflare (the gRPC header is required). The old HTTP protocol also worked with just the gRPC header. I haven't tried H3.

Thanks to O⁠_⁠o and $_$king chen for the feedback; I'll adjust accordingly. Also, I don't think it should matter to H3 whether the HTTP method is POST or something else.

RPRX (Member, Author) commented Nov 21, 2024

Yep, for the first time, splithttp upload speedtest did not fail

Actually 2d7b0e8 already fixed packet-up's upload problem, so it won't fail on main either.

Fangliding (Member) commented Nov 21, 2024

Is a mode that just adds a Content-Type header really necessary? By the way, why wasn't yesterday's PR merged? Do you still plan to change it?

RPRX (Member, Author) commented Nov 21, 2024

Is a mode that just adds a Content-Type header really necessary?

Because I want to make fakegrpc-up the default, and that is best done under a new mode name. The current plan is to default to fakegrpc-up with TLS/REALITY H2, and packet-up otherwise.

It seems the new era opened by SplitHTTP H3 still matters.

By the way, why wasn't yesterday's PR merged? Do you still plan to change it?

I'll merge it shortly. But once I add the "stream-full" mode to XHTTP, the HTTP transport can be replaced entirely too; this one PR kills off two transports.

RPRX (Member, Author) commented Nov 21, 2024

To j2rong: CDN traffic limits on gRPC are indeed something to consider; in that case let's hold off on "fakegrpc-full" and keep uplink and downlink logically separated.

To jiang zhexin: Strictly speaking, H1 does support streaming uploads, but in practice some implementations don't, e.g. Nginx when proxying upstream, which is why I said H1 doesn't really support it.

RPRX (Member, Author) commented Nov 21, 2024

Is a mode that just adds a Content-Type header really necessary?

Right, one downside of adding the gRPC header via the headers config is that the downlink would get it too, and CDNs may rate-limit gRPC traffic, so a new mode is still needed.

I plan to update this PR and then cut a release. Fake-gRPC uplink with a normal downlink, combined with header padding, XMUX, and the rest, pretty much buries the gRPC transport outright.

RPRX (Member, Author) commented Nov 21, 2024

Nginx defaults to HTTP/1.0; manually setting it to 1.1 fixes it 🙈🙈

Thanks to jiang zhexin for the important information; I've updated the notes in #3994.

RPRX (Member, Author) commented Nov 21, 2024

With the Content-Type added, CF can be abused even harder; another new era is upon us.

To CF it makes no difference what runs behind a gRPC header, so this drop-in replacement adds no extra "abuse"; we just no longer have to carry a gRPC library around. It's mainly a performance optimization, not really a new era.

RPRX (Member, Author) commented Nov 21, 2024

To jiang zhexin: The "performance optimization" isn't about optimizing gRPC's performance; it's about dropping the gRPC transport entirely and switching to the better XHTTP, which is obviously the more thorough fix.

RPRX (Member, Author) commented Nov 21, 2024

In a parallel universe, 鸭鸭 simply added the gRPC header to the HTTP transport and the gRPC transport never existed, just as Xray may never have an XGRPC.

RPRX (Member, Author) commented Nov 21, 2024

I've decided to keep it simple: just add Content-Type: application/grpc to stream-up, plus a noGRPCHeader option to opt out, like the SSE header added on the server side.

@RPRX RPRX changed the title XHTTP client: Add "fakegrpc-up" mode XHTTP client: Add gRPC header to "stream-up" mode by default Nov 21, 2024
@RPRX RPRX merged commit 817fa72 into main Nov 21, 2024
36 checks passed
@RPRX RPRX deleted the xhttp branch November 21, 2024 07:17

xqzr (Contributor) commented Nov 21, 2024

proxy_http_version 1.1;

Does HTTP/1.1 support streaming?

gubiao commented Nov 21, 2024

proxy_http_version 1.1;

Does HTTP/1.1 support streaming?

Remove that line and you get HTTP/1.0. Nginx only speaks HTTP/1.0 and HTTP/1.1 to the upstream, and the default is HTTP/1.0.

RPRX (Member, Author) commented Nov 21, 2024

@gubiao try changing it to grpc_pass


gubiao commented Nov 21, 2024

Changed Nginx to:

grpc_pass unix:/opt/xray/h2.sock;

and enabled gRPC support in the CF dashboard; that seems to work. Even though the gRPC is only faked via Content-Type, the gRPC switch in the CF dashboard still has to be on.

RPRX (Member, Author) commented Nov 21, 2024

Even though the gRPC is only faked via Content-Type, the gRPC switch in the CF dashboard still has to be on

Well, what did you expect? Also, try switching Nginx back to proxy_http_version 1.1; I suspect that gets through as well.

RPRX (Member, Author) commented Nov 21, 2024

To 艾莉卡:

Speaking of which, after this change, why not just use gRPC directly?

Because it's stronger than gRPC; just read what this PR says: #4042 (comment)

So the new stream version can get past nginx now?
With just a gRPC header? 😂

It doesn't rely on the gRPC header; just read what the previous PR says: #3994 (comment)


lxsq (Contributor) commented Nov 21, 2024

To 艾莉卡:

Speaking of which, after this change, why not just use gRPC directly?

Because it's stronger than gRPC; just read what this PR says: #4042 (comment)

So the new stream version can get past nginx now?
With just a gRPC header? 😂

It doesn't rely on the gRPC header; just read what the previous PR says: #3994 (comment)

My bad, I hadn't seen the new PR 😫

@gubiao reportedly proxy_http_version 1.1 can also reverse-proxy stream-up; what I'm curious about is whether adding the gRPC header breaks that reverse proxying

It doesn't seem to. I ran the same config file with ghcr.io/xtls/xray-core:24.11.21 and ghcr.io/xtls/xray-core:24.11.11, using stream-up+TLS+h2, with Nginx in front of the server. Config:

{
        # ......
        location /*** {
                # Stream Response
                proxy_cache off; # disable caching
                proxy_buffering off; # disable buffering
                proxy_request_buffering off;
                chunked_transfer_encoding on; # enable chunked transfer encoding
                keepalive_timeout 300;

                proxy_pass http://unix:/dev/shm/xray_splithttp.socket;
                proxy_http_version 1.1;
                proxy_redirect off;

                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
}

Both run fine.

RPRX (Member, Author) commented Nov 21, 2024

@lxsq so just adding proxy_request_buffering off is enough, and the request's Content-Type is irrelevant, right?

@fodhelper

Is it possible to add a WebSocket header and use stream-up for H1 CDNs (like HTTPUpgrade)?

xqzr (Contributor) commented Nov 21, 2024

chunked_transfer_encoding on; # enable chunked transfer encoding

It is enabled by default
https://nginx.org/ru/docs/http/ngx_http_core_module.html#chunked_transfer_encoding

proxy_buffering off; # disable buffering

The Xray server responds with X-Accel-Buffering: no, so this directive is redundant
https://xtls.github.io/config/transports/splithttp.html#%E5%8D%8F%E8%AE%AE%E7%BB%86%E8%8A%82
https://nginx.org/ru/docs/http/ngx_http_proxy_module.html#proxy_buffering

proxy_cache off; # disable caching

It is already off by default
https://nginx.org/ru/docs/http/ngx_http_proxy_module.html#proxy_cache

gubiao commented Nov 21, 2024

Tested it: indeed, just adding proxy_request_buffering off makes proxy_pass work. Thanks.
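Putting lxsq's config and xqzr's notes together, a minimal location block for stream-up behind Nginx might look like this (path and socket are placeholders):

```nginx
location /yourpath {
    proxy_pass http://unix:/dev/shm/xray_xhttp.socket;
    proxy_http_version 1.1;      # upstream default is HTTP/1.0
    proxy_request_buffering off; # the one directive streaming uploads actually need
    # proxy_buffering off and proxy_cache off are redundant: Xray sends
    # X-Accel-Buffering: no, and proxy_cache is off by default
}
```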

lxsq added a commit to lxsq/Xray-examples-stream that referenced this pull request Nov 21, 2024
@ImMohammad20000

Can you set noGRPCHeader: true by default? It causes problems in some clients and reverse-proxy configurations.

RPRX (Member, Author) commented Nov 25, 2024

Can you set noGRPCHeader: true by default? It causes problems in some clients and reverse-proxy configurations.

It won't be made the default. Btw, what problems exactly?

ImMohammad20000 commented Nov 25, 2024

Can you set noGRPCHeader: true by default? It causes problems in some clients and reverse-proxy configurations.

It won't be made the default. Btw, what problems exactly?

IDK, I'm having trouble getting fake-gRPC to work with Caddy when I use stream-up mode

@fakegrpc {
        path_regexp fakegrpc ^/(\d+).*$ # port + something random
        header Content-Type application/grpc
}

handle @fakegrpc {
        reverse_proxy 127.0.0.1:{re.fakegrpc.1}
}

I don't know what the problem is
