diff --git a/content/issue-3/optimizing-compilation-for-databend-zh.md b/content/issue-3/optimizing-compilation-for-databend-zh.md new file mode 100644 index 0000000..b0daa59 --- /dev/null +++ b/content/issue-3/optimizing-compilation-for-databend-zh.md @@ -0,0 +1,217 @@ +![](/static/issue-3/optimizing-compilation-for-databend/2.png) + +# 背景 + +时至今日,Databend 已经成长为一个大型、复杂、完备的数据库系统。团队维护着数十万行代码,每次发布十几个编译产物,并且还提供基于 Docker 的一些构建工具以改善开发者 / CI 的体验。 + +之前的文章介绍过 [使用 PGO 优化 Databend 二进制构建](https://www.databend.cn/blog/2023/02/24/build-databend-with-pgo) ,用户可以根据自己的工作负载去调优 Databend 的编译。再早些时候,还有一些介绍 Databend [开发环境](https://www.databend.cn/blog/setup-databend-dev-env) 和 [构建](https://www.databend.cn/blog/build-and-run-databend) 的文章。 + +对于 Databend 这样的中大型 Rust 程序而言,编译实在算不上是一件轻松的事情: + +![](/static/issue-3/optimizing-compilation-for-databend/3.png) + +- 一方面,在复杂的项目依赖和样板代码堆积之下,Rust 的编译时间显得不那么理想,前两年 [Brian Anderson 的文章](https://cn.pingcap.com/blog/rust-compilation-model-calamity) 中也提到“Rust 糟糕的编译时间”这样的描述。 +- 另一方面,为了维护构建结果,不得不引入一些技巧来维护编译流水线的稳定,这并不是一件“一劳永逸”的事情,随着 Workflow 复杂性的提高,就不得不陷入循环之中。 + +为了优化编译体验,Databend 陆陆续续做过很多优化工作,今天的文章将会和大家一同回顾 Databend 中改善编译时间的一些优化。 + +# 可观测性 + +可观测性并不是直接作用于编译优化的手段,但可以帮助我们识别当前编译的瓶颈在什么地方,从而对症下药。 + +## cargo build --timings + +这一命令有助于可视化程序的编译过程。 + +在 1.59 或更早版本时可以使用 `cargo +nightly build -Ztimings` 。 + +在浏览器中打开结果 HTML 可以看到一个甘特图,其中展示了程序中各个 crate 之间的依赖关系,以及程序的编译并行程度和代码生成量级。 +通过观察图表,我们可以决定是否要提高某一模块的代码生成单元数目,或者要不要进一步拆解以优化整个编译流程。 + +![](/static/issue-3/optimizing-compilation-for-databend/4.png) + +## cargo-depgraph + +这个工具其实不太常用,但可以拿来分析依赖关系。有助于找到一些潜在的优化点,特别是需要替换某些同类依赖或者优化 crates 组织层次的时候。 + +![](/static/issue-3/optimizing-compilation-for-databend/5.png) + +# 无痛优化,从调整配置开始 + +改善编译体验的第一步其实并不是直接对代码动手术,很多时候,只需要变更少数几项配置,就能够获得很大程度上的改善。 + +## Bump, Bump, Booooooooom + +前面提到过 Rust 团队的成员们也很早就意识到,编译时间目前还并不理想。所以编译器团队同样会有计划去不断进行针对性的优化。经常可以看到在版本更新说明中有列出对编译的一些改进工作。 + +```toml +[toolchain] +channel = "nightly-2023-03-10" +components = ["rustfmt", "clippy", "rust-src", "miri"] +``` + +另外,上游项目同样可能会随着时间的推移去改善过去不合理的设计,很多时候这些改进也最终会反映在对编译的影响上。 + +![](/static/issue-3/optimizing-compilation-for-databend/6.png) + +一个改善编译时间的最简单的优化方式就是始终跟进上游的变更,并且秉着“上游优先”的理念去参与到生态建设之中。Databend 团队从一开始就是 Rust nightly 的忠实簇拥,并且为更新工具链和依赖关系提供了简明的指导。 + +## 缓存,转角遇到 sccache + +缓存是一种常见的编译优化手段,思路也很简单,只需要把预先构建好的产物存储起来,在下次构建的时候继续拿过来用。 + +早期 Databend 使用 `rust-cache` 这个 action 在 CI 中加速缓存,获得了不错的效果。但是很遗憾,我们不得不经常手动更新 key 来清理缓存,以避免构建时的误判。而且,Rust 早期对增量构建的支持也很差劲,有那么一段时间可能会考虑如何配置流水线来进行一些权衡。 + +随着时间的推移,一切变得不同了起来。 + +```urlpreview +https://github.com/mozilla/sccache +``` + +首先是 Sccache 恢复了活力,而 OpenDAL 也成功打入其内部,成为支撑 Rust 编译缓存生态的重要组件,尽管在本地构建时使用它常常无法展现出真正的威力,但是放在 CI 中,还是能够带来很大惊喜的。 + +另一个重要的改变是,Rust 社区意识到增量编译对于 CI 来讲并不能很好 Work 。 + +> CI builds often are closer to from-scratch builds, as changes are typically much bigger than from a local edit-compile cycle. For from-scratch builds, incremental adds an extra dependency-tracking overhead. It also significantly increases the amount of IO and the size of ./target, which make caching less effective. 
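+
+如果想在自己的项目或 CI 中尝试类似的思路,下面是一个极简的示意(并非 Databend 实际使用的配置,远端缓存后端请参考 sccache 文档按需选择):
+
+```toml
+# .cargo/config.toml —— 仅作示意
+[build]
+rustc-wrapper = "sccache"  # 用 sccache 包装 rustc,命中缓存时可以跳过真正的编译
+incremental = false        # 在 CI 中关闭增量编译,让缓存发挥更大作用
+```
+
+在 CI 中通常还需要通过环境变量(例如 S3 后端的 `SCCACHE_BUCKET`)告诉 sccache 把缓存存放在哪里,细节同样以官方文档为准。
+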
+ +# 轻装上阵,将冷气传递给每一个依赖 + +Rust 生态里面有一个很有意思的项目是 [mTvare6/hello-world.rs](https://github.com/mTvare6/hello-world.rs) ,它尽可能给你展现了如何让一个 Rust 项目变得尽可能糟糕。 + +![](/static/issue-3/optimizing-compilation-for-databend/8.png) + +特别是: + +> in a few lines of code with few(1092) dependencies + +Rust 自身是不太能很好自动处理这一点的,它需要把所有依赖一股脑下载下来编译一通。所以避免无用依赖的引入就成为一件必要的事情了。 + +最开始的时候,Databend 引入 cargo-udeps 来检查无用的依赖,大多数时候都工作很良好,但最大的缺点在于,每次使用它检查依赖就相当于要编译一遍,在 CI 中无疑是不划算的。 + +[sundy-li](https://github.com/sundy-li) 发现了另外一个快速好用的工具,叫做 cargo-machete 。 + +![](/static/issue-3/optimizing-compilation-for-databend/9.png) + +一个显著的优点是它很快,因为一切只需要简单的正则表达式来处理。而且也支持了自动修复,这意味着我们不再需要挨个检索文件再去编辑。 + +不过 machete 并不是完美的工具,由于只是进行简单的正则处理,有一些情况无法准确识别,不过 ignore 就好了,总体上性价比还是很高的。 + +## 稀疏索引 + +为了确定 crates.io 上存在哪些 crates,Cargo 需要下载并读取 crates.io-index ,该索引位于托管在 GitHub 上的 git 存储库中,其中列出了所有 crates 的所有版本。 + +然而,随着时间推移,由于索引已经大幅增长,初始获取和更新变得很慢。RFC 2789 引入了稀疏索引来改进 Cargo 访问索引的方式,并使用 https://index.crates.io/ 进行托管。 + +```toml +[registries.crates-io] +protocol = "sparse" +``` + +## linker + +如果项目比较大,而且依赖繁多,那么可能在链接时间上会比较浪费。特别是在你只改了几行代码,但编译却花了很久的时候。 + +最简单的办法就是选择比默认链接器更快的链接器。 + +![](/static/issue-3/optimizing-compilation-for-databend/10.png) + +lld 或者 mold 都可以改善链接时间,Databend 最后选择使用 mold 。其实在 Databend 这个量级的程序上,两个链接器的差距并不明显,但是,使用 mold 的一个潜在好处是能够节约一部分编译时候消耗的内存。 + +```toml +[target.x86_64-unknown-linux-gnu] +linker = "clang" +rustflags = ["-C", "link-arg=-fuse-ld=/path/to/mold"] +``` + +## 编译相关配置 + +先看一个常见的 split-debuginfo,在 MacOS 上,rustc 会运行一个名为 dsymutil 的工具,该工具会分析二进制文件,然后构建调试信息目录。配置 split-debuginfo,可以跳过 dsymutil ,从而加快构建速度。 + +```toml +split-debuginfo = "unpacked" +``` + +另外的一个例子是 codegen-units,Databend 在编译时使用 codegen-units = 1 来增强优化,并且克制二进制体积大小。但是考虑到部分依赖在编译时会有特别长的代码生成时间(因为重度依赖宏),所以需要针对性放开一些限制。 + +```toml +[profile.release.package] +arrow2 = { codegen-units = 4 } +common-functions = { codegen-units = 16 } +databend-query = { codegen-units = 4 } +databend-binaries = { codegen-units = 4 } +``` + +# 重新思考,更合理的代码组织 + +前面是一些配置上的调整,接下来将会探讨重构对代码编译时间的一些影响。 + +## 拆分到更合理的 crates 规模 + +对于一个大型的 All in One 式的 Crate 而言,拆分 crates 算是一个比较有收益的重构。一方面可以显著改善并行度。另一方面,通过解耦交叉依赖/循环依赖,可以帮助 Rust 更快地处理代码编译。 + +```urlpreview +https://github.com/datafuselabs/databend/issues/6180 +``` + +同时,还有一个潜在的好处,就是拆分以后,由于代码的边界更为清晰,维护起来也会省力一些。 + +## 单元式测试与集成式测试的界限 + +单元测试的常见组织形式包括在 src 中维护 tests mod ,和在 tests 目录下维护对应的测试代码。 + +根据 Delete Cargo Integration Tests 的建议,Databend 很早就从代码中剥离了所有的单元测试,并组织成类似这样的形式 + +``` +tests/ + it/ + main.rs + foo.rs + bar.rs +``` + +这种形式避免将 `tests/` 下面的每个文件都编译成一个单独的二进制文件,从而减轻对编译时间的影响。 + +另外,Rust 编译时处理 tests mod 和 docs tests 也需要花费大量时间,特别是 docs tests 还需要另外构建目标,在采用上面的组织形式之后,就可以在配置中关掉。 + +但是,这种形式并不十分优雅,不得不为所有需要测试的内容设置成 public ,容易破坏代码之间的模块化组织,在使用前建议进行深入评估。 + +## 更优雅的测试方法 + +对应到编译时间上,可以简单认为,单元测试里需要编译的代码越多,编译时间自然就会越慢。 + +另外,对于 Databend 而言,有相当一部分测试都是对输入输出的端到端测试,如果硬编码在单元测试中需要增加更多额外的格式相关的工作,维护也会比较费力。 + +![](/static/issue-3/optimizing-compilation-for-databend/12.png) + +Databend 巧妙运用 golden files 测试和 SQL logic 测试,替换了大量内嵌在单元测试中的 SQL 查询测试和输出结果检查,从而进一步改善了编译时间。 + +# 遗珠之憾 + +## cargo nextest + +cargo nextest 让测试也可以快如闪电,并且提供更精细的统计和优雅的视图。Rust 社区中有不少项目通过引入 cargo nextest 大幅改善测试流水线的时间。 + +![](/static/issue-3/optimizing-compilation-for-databend/13.png) + +但 Databend 目前还无法迁移到这个工具上。一方面,配置相关的测试暂时还不被支持,如果再针对去单独跑 cargo test 还要重新编译。另一方面,有一部分与超时相关的测试设定了执行时间,必须等待执行完成。 + +## cargo hakari + +改善依赖项的编译,典型的例子其实是 workspace-hack ,将重要的公共依赖放在一个目录下,这样这些依赖就不需要反复编译了。Rust 社区中的 cargo-hakari,可以用来自动化管理 workspace-hack 。 + +![](/static/issue-3/optimizing-compilation-for-databend/14.png) + +Databend 这边则是由于有大量的 common 组件,主要二进制程序都建立在 
common 组件上,暗中符合这一优化思路。另外,随着 workspace 支持依赖继承之后,维护压力也得到减轻。 + +# 总结 + +这篇文章介绍了 Databend 团队在改善 Rust 项目编译时间上做的一些探索和努力,从配置优化和代码重构这两个角度,提供了一些能够优化编译时间的一些建议。 + +# 参考资料 + +- [Fast Rust Builds](https://matklad.github.io/2021/09/04/fast-rust-builds.html) +- [Delete Cargo Integration Tests](https://matklad.github.io/2021/02/27/delete-cargo-integration-tests.html) +- [Better support of Docker layer caching in Cargo](https://hackmd.io/@kobzol/S17NS71bh) +- [2023-04: 为什么你该试试 Sccache?](https://xuanwo.io/reports/2023-04/) +- [The Rust Performance Book - Compile Times](https://nnethercote.github.io/perf-book/compile-times.html) +- [Cargo Registry 稀疏索引的一些介绍](https://blog.dcjanus.com/posts/cargo-registry-index-in-http/) diff --git a/content/issue-3/optimizing-compilation-for-databend.md b/content/issue-3/optimizing-compilation-for-databend.md new file mode 100644 index 0000000..a6a124c --- /dev/null +++ b/content/issue-3/optimizing-compilation-for-databend.md @@ -0,0 +1,209 @@ +![](/static/issue-3/optimizing-compilation-for-databend/1.png) + +# Background + +Compiling a medium to large Rust program is not a breeze due to the accumulation of complex project dependencies and boilerplate code. As noted in [an article by Brian Anderson](https://www.pingcap.com/blog/rust-compilation-model-calamity/), "But Rust compile times are so, so bad." To maintain the stability of the build pipeline, it is necessary to introduce some techniques, but there is no "one-size-fits-all" solution. As the complexity of the workflow increases, it can become a loop. + +![](/static/issue-3/optimizing-compilation-for-databend/3.png) + +The Databend team encountered similar challenges in compiling the product from hundreds of thousands of lines of code and in developing Docker-based build tools to enhance the developers/CI workflow. This article outlines the measures taken by the team to address the compilation challenges. If you're interested, check out these earlier posts to get a general idea of how we compile Databend: + +- [Building Databend](​https://databend.rs/doc/contributing/building-from-source) +- [Optimizing Databend Binary Builds with Profile-guided Optimization](https://databend.rs/blog/profile-guided-optimization) + +# Observability + +While observability may not directly optimize compilation, it can aid in identifying where the bottleneck in the compilation process lies. This knowledge can help us determine the appropriate remedy to address the issue. + +## Compilation Process + +This command visualizes the compilation process of Databend. + +In Rust version 1.59 or earlier, you can use `cargo +nightly build -Ztimings`. + +When opened in a web browser, the resulting HTML file shows a Gantt chart displaying the dependency relationships between crates in the program, the degree of parallelism in compilation, and the order of magnitude of code generation. + +Based on the chart, we can decide whether to increase the number of code generation units for a particular module, or whether to further decompose to optimize the overall build process. + +![](/static/issue-3/optimizing-compilation-for-databend/4.png) + +## Dependent Relationships + +Although not commonly utilized, [cargo-depgraph](https://crates.io/crates/cargo-depgraph) can be employed to analyze dependent relationships. It helps to find potential optimization points, especially when you need to replace some similar dependencies or optimize the organization level of crates. 
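+
+For reference, a minimal way to produce both views locally might look like the following (the commands follow the two tools' documentation; exact flags and output paths may vary between versions):
+
+```bash
+# Visualize the compilation timeline; the HTML report lands in target/cargo-timings/
+cargo build --timings
+
+# Render the crate dependency graph with cargo-depgraph plus Graphviz
+cargo install cargo-depgraph
+cargo depgraph | dot -Tsvg > depgraph.svg
+```
+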
+ +![](/static/issue-3/optimizing-compilation-for-databend/5.png) + +# Painless Optimization with Configuration Adjustments + +The first step to improving the compilation experience does not involve directly altering the code. In many cases, only a few configuration adjustments are necessary to achieve significant improvement. + +## Always Bump & Upstream First + +As mentioned earlier, members of the Rust team were also early on aware that compile times are currently suboptimal. Therefore, the Databend team has plans to continually optimize for this issue. Improvements to compilation can often be found listed in the version update notes. + +```toml +[toolchain] +channel = "nightly-2023-03-10" +components = ["rustfmt", "clippy", "rust-src", "miri"] +``` + +In addition, upstream projects may also improve unreasonable designs over time, and many of these improvements will ultimately be reflected in the impact on compilation. + +![](/static/issue-3/optimizing-compilation-for-databend/6.png) + +One of the simplest ways to improve compile time is to always keep up with upstream changes and participate in ecosystem building with the philosophy of "upstream first". Databend has been a loyal follower of Rust nightly from the very beginning and provided [concise guidance](https://databend.rs/doc/contributing/routine-maintenance) for updating the toolchain and dependency relationships. + +## Caching + +Caching is a common compilation optimization technique. The idea is simple: store pre-built artifacts and reuse them the next time you build. + +Initially, Databend employed the rust-cache action in CI to improve caching and achieved promising results. However, we had to manually update the key frequently to clear the cache and prevent misjudgment during the build. + +Moreover, Rust's early support for incremental builds was terrible. For a while, we had to consider how to configure the pipeline to make some trade-offs. + +Things have now changed. + +```urlpreview +https://github.com/mozilla/sccache +``` + +[Sccache](https://github.com/mozilla/sccache) was revitalized and [OpenDAL](https://github.com/apache/incubator-opendal) was successfully integrated into it, becoming a crucial component that supports the Rust compilation cache ecosystem. Although it may not fully showcase its potential when building locally, it can still deliver great results in CI. + +Another important change is that the Rust community realized that incremental compilation did not work well for CI. + +> CI builds often are closer to from-scratch builds, as changes are typically much bigger than from a local edit-compile cycle. For from-scratch builds, incremental adds an extra dependency-tracking overhead. It also significantly increases the amount of IO and the size of ./target, which make caching less effective. ([Fast Rust Builds](https://matklad.github.io/2021/09/04/fast-rust-builds.html)) + +# Remove Unused Dependencies + +There is an interesting project in the Rust ecosystem known as [mTvare6/hello-world.rs](https://github.com/mTvare6/hello-world.rs), which demonstrates how to create a Rust project that is as poorly written as possible. + +![](/static/issue-3/optimizing-compilation-for-databend/8.png) + +In particular: + +> in a few lines of code with few(1092) dependencies + +Rust itself is not very good at automatically handling dependencies. It always downloads and compiles all dependencies in one go. Therefore, avoiding unnecessary introduction of dependencies becomes essential. 
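+
+Beyond dropping whole crates, trimming default features can also keep the dependency tree lean. A purely illustrative `Cargo.toml` entry (the crate and feature names below are placeholders, not actual Databend dependencies) might look like this:
+
+```toml
+[dependencies]
+# Opt out of the default feature set and enable only what is actually needed,
+# so unused optional dependencies never enter the build graph.
+some-http-client = { version = "0.5", default-features = false, features = ["rustls-tls"] }
+```
+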
+ +At first, Databend introduced [cargo-udeps](https://crates.io/crates/cargo-udeps) to check for unused dependencies. Most of the time it worked well. However, the major drawback was that every time dependencies were checked, it was equivalent to recompiling, which was undoubtedly inefficient in a CI environment. + +[sundy-li](https://github.com/sundy-li) found another fast and easy to use tool called [cargo-machete](https://crates.io/crates/cargo-machete). + +![](/static/issue-3/optimizing-compilation-for-databend/9.png) + +One significant benefit is that **machete** is fast as it only requires simple regular expressions to handle everything. Additionally, it supports automatic fixes, eliminating the need to search through files one by one and make manual edits. + +However, **machete** is not a flawless tool. Due to its reliance on simple regular expression processing, it may not accurately identify some situations, but it is acceptable to ignore these instances. + +## Sparse Index + +In order to determine which crates exist on [crates.io](https://crates.io/), Cargo needs to download and read the crates.io-index, which is located in a git repository hosted on GitHub and lists all versions of all crates. + +However, as the index has grown significantly over time, the initial acquisition and updates have become painfully slow. + +[RFC 2789](https://rust-lang.github.io/rfcs/2789-sparse-index.html) introduced a sparse index to improve Cargo's access to the index and is hosted at [https://index.crates.io/](https://index.crates.io/). + +```toml +[registries.crates-io] +protocol = "sparse" +``` + +## Linker + +If a project is relatively large and has many dependencies, it may waste a lot of time on linking. Few code changes may lead to a long compile time. + +The simplest solution is to choose a faster linker than the default one. + +![](/static/issue-3/optimizing-compilation-for-databend/10.png) + +Both [lld](https://github.com/llvm/llvm-project/tree/main/lld) and [mold](https://github.com/rui314/mold) can improve link time. Databend eventually chose to use **mold**. In fact, the difference between the two linkers is not obvious for Databend. However, using **mold** has a potential benefit of saving some memory consumption during compilation. + +```toml +[target.x86_64-unknown-linux-gnu] +linker = "clang" +rustflags = ["-C", "link-arg=-fuse-ld=/path/to/mold"] +``` + +## Compile-related Profile + +First look at a common setting: [split-debuginfo](https://doc.rust-lang.org/rustc/codegen-options/index.html#split-debuginfo). + +On macOS, rustc runs a tool called [dsymutil](https://llvm.org/docs/CommandGuide/dsymutil.html) which analyzes the binary and then builds a debug information directory. Configuring `split-debuginfo` skips **dsymutil** and speeds up the build. + +```toml +split-debuginfo = "unpacked" +``` + +Another example is [codegen-units](https://doc.rust-lang.org/rustc/codegen-options/index.html#codegen-units). + +Databend uses `codegen-units = 1` during compilation to enhance optimization and restrain the size of binaries. However, considering that some dependencies have particularly long code generation time during compilation (due to heavy macro dependencies), it is necessary to loosen some restrictions specifically. 
+ +```toml +[profile.release.package] +arrow2 = { codegen-units = 4 } +common-functions = { codegen-units = 16 } +databend-query = { codegen-units = 4 } +databend-binaries = { codegen-units = 4 } +``` + +# More Reasonable Code Structures + +The above are some configuration adjustments. Next, we will explore the impact of refactoring on compile time. + +## Split into More Reasonable Crate Sizes + +Refactoring a large all-in-one crate into smaller ones can be a highly beneficial strategy. It can not only improve parallelism, but also help Rust process code compilation faster by decoupling cross dependencies and circular dependencies. + +```urlpreview +https://github.com/datafuselabs/databend/issues/6180 +``` + +Splitting crates also makes the boundaries of the code more apparent, which can result in easier maintenance. + +## The Boundary between Unit Testing and Integration Testing + +Common forms of unit test organization include maintaining `tests` mod in `src` and maintaining corresponding test code in the `tests` directory. + +Following the recommendation of [Delete Cargo Integration Tests](https://matklad.github.io/2021/02/27/delete-cargo-integration-tests.html), Databend has stripped all unit tests from the code very early and organized them in a similar form: + +``` +tests/ + it/ + main.rs + foo.rs + bar.rs +``` + +This form avoids compiling each file under `tests/` into some separate binary files, thereby reducing the impact on compile time. + +In addition, Rust spends a lot of time processing tests mod and docs tests during compilation, especially docs tests which require building additional targets. After adopting the above organization form, they can be turned off in the configuration. + +However, this form is not elegant enough for us. All contents that need to be tested have to be set as public, which easily breaks the modular organization of the code. In-depth evaluation is recommended before use. + +## More Elegant Testing Methods + +We all know that the more code that needs to be compiled for unit tests, the slower the compilation time will be. + +In addition, for Databend, a considerable part of the tests are end-to-end tests of input and output. If these tests are hardcoded in unit tests, much more format-related work needs to be added, which also requires substantially more effort to maintain. + +![](/static/issue-3/optimizing-compilation-for-databend/12.png) + +The use of golden file testing and SQL logic testing in Databend replaces a large number of SQL query tests and output result checks embedded in unit tests, which further improves compile time. + +# Cargo Snubs + +## cargo-nextest + +[cargo nextest](https://nexte.st/) makes testing as fast as lightning and provides finer statistics and elegant views. Many projects in the Rust community have greatly improved test pipeline time by introducing cargo nextest. + +![](/static/issue-3/optimizing-compilation-for-databend/13.png) + +However, Databend is currently unable to switch to this tool for two reasons. Firstly, configuration-related tests are not currently supported, so if you need to run cargo test separately, you have to recompile. Secondly, some tests related to timeouts are set to a specific execution time and must wait for completion. + +## cargo-hakari + +One typical example of improving the compilation of dependencies is workspace-hack, which places important public dependencies in a directory, avoiding the need to repeatedly recompile these dependencies. 
[cargo-hakari](https://crates.io/crates/cargo-hakari) can be used to automatically manage workspace-hack. + +![](/static/issue-3/optimizing-compilation-for-databend/14.png) + +Databend has a large number of common components, and the main binary programs are built on common components, implicitly in line with this optimization idea. In addition, with the support of dependencies inheritance in the workspace, the maintenance pressure has also been reduced. diff --git a/content/issue-3/zine.toml b/content/issue-3/zine.toml index 8743ef5..e0ecef1 100644 --- a/content/issue-3/zine.toml +++ b/content/issue-3/zine.toml @@ -39,6 +39,17 @@ pub_date = "2023-04-05" publish = true featured = true +[[article]] +file = "optimizing-compilation-for-databend.md" +title = "Optimizing Compilation for Databend" +author = ["PsiACE", "databend"] +topic = ["database", "optimization"] +pub_date = "2023-04-20" +publish = true +featured = true +canonical = "https://databend.rs/blog/2023/04/20/optimizing-compilation-for-databend" +i18n.zh = { file = "optimizing-compilation-for-databend-zh.md", title = "Databend 中的 Rust 编译时间优化小技巧"} + [[article]] file = "task-stats-alloc.md" title = "TaskStatsAlloc: Fine-grained memory statistics in Rust" diff --git a/static/avatar/databend.svg b/static/avatar/databend.svg new file mode 100644 index 0000000..565ac15 --- /dev/null +++ b/static/avatar/databend.svg @@ -0,0 +1 @@ +Databend LOGO \ No newline at end of file diff --git a/static/avatar/psiace.jpg b/static/avatar/psiace.jpg new file mode 100644 index 0000000..e69d2e8 Binary files /dev/null and b/static/avatar/psiace.jpg differ diff --git a/static/issue-3/optimizing-compilation-for-databend/1.png b/static/issue-3/optimizing-compilation-for-databend/1.png new file mode 100644 index 0000000..abd5a78 Binary files /dev/null and b/static/issue-3/optimizing-compilation-for-databend/1.png differ diff --git a/static/issue-3/optimizing-compilation-for-databend/10.png b/static/issue-3/optimizing-compilation-for-databend/10.png new file mode 100644 index 0000000..ca95240 Binary files /dev/null and b/static/issue-3/optimizing-compilation-for-databend/10.png differ diff --git a/static/issue-3/optimizing-compilation-for-databend/12.png b/static/issue-3/optimizing-compilation-for-databend/12.png new file mode 100644 index 0000000..7eb0c6a Binary files /dev/null and b/static/issue-3/optimizing-compilation-for-databend/12.png differ diff --git a/static/issue-3/optimizing-compilation-for-databend/13.png b/static/issue-3/optimizing-compilation-for-databend/13.png new file mode 100644 index 0000000..478c8d9 Binary files /dev/null and b/static/issue-3/optimizing-compilation-for-databend/13.png differ diff --git a/static/issue-3/optimizing-compilation-for-databend/14.png b/static/issue-3/optimizing-compilation-for-databend/14.png new file mode 100644 index 0000000..54f9386 Binary files /dev/null and b/static/issue-3/optimizing-compilation-for-databend/14.png differ diff --git a/static/issue-3/optimizing-compilation-for-databend/2.png b/static/issue-3/optimizing-compilation-for-databend/2.png new file mode 100644 index 0000000..995b109 Binary files /dev/null and b/static/issue-3/optimizing-compilation-for-databend/2.png differ diff --git a/static/issue-3/optimizing-compilation-for-databend/3.png b/static/issue-3/optimizing-compilation-for-databend/3.png new file mode 100644 index 0000000..6979e3c Binary files /dev/null and b/static/issue-3/optimizing-compilation-for-databend/3.png differ diff --git 
a/static/issue-3/optimizing-compilation-for-databend/4.png b/static/issue-3/optimizing-compilation-for-databend/4.png new file mode 100644 index 0000000..a3ef5bf Binary files /dev/null and b/static/issue-3/optimizing-compilation-for-databend/4.png differ diff --git a/static/issue-3/optimizing-compilation-for-databend/5.png b/static/issue-3/optimizing-compilation-for-databend/5.png new file mode 100644 index 0000000..18f2ef4 Binary files /dev/null and b/static/issue-3/optimizing-compilation-for-databend/5.png differ diff --git a/static/issue-3/optimizing-compilation-for-databend/6.png b/static/issue-3/optimizing-compilation-for-databend/6.png new file mode 100644 index 0000000..16d58b1 Binary files /dev/null and b/static/issue-3/optimizing-compilation-for-databend/6.png differ diff --git a/static/issue-3/optimizing-compilation-for-databend/8.png b/static/issue-3/optimizing-compilation-for-databend/8.png new file mode 100644 index 0000000..06dd250 Binary files /dev/null and b/static/issue-3/optimizing-compilation-for-databend/8.png differ diff --git a/static/issue-3/optimizing-compilation-for-databend/9.png b/static/issue-3/optimizing-compilation-for-databend/9.png new file mode 100644 index 0000000..876a256 Binary files /dev/null and b/static/issue-3/optimizing-compilation-for-databend/9.png differ diff --git a/zine-data.json b/zine-data.json index 801b4bb..8df233a 100644 --- a/zine-data.json +++ b/zine-data.json @@ -25,11 +25,21 @@ "A GraphQL server library implemented in Rust. Contribute to async-graphql/async-graphql development by creating an account on GitHub.", "https://opengraph.githubassets.com/a3613d90408005e2e1b9323a30ba15151184685a902bfa501fa7d1bb67d04e53/async-graphql/async-graphql" ], + "https://github.com/datafuselabs/databend/issues/6180": [ + "split databend-query to multiple crates · Issue #6180 · datafuselabs/databend · GitHub", + "Summary Now Databend-Query is a rather large crate and we had to waste a lot of time on compiling and linking. It's time to split it up into a few relatively small crates. Related #4399", + "https://opengraph.githubassets.com/855cd62b49753184fd23c301b3da40402c443696013761c9ec887426aea4e548/datafuselabs/databend/issues/6180" + ], "https://github.com/egraphs-good/egg": [ "GitHub - egraphs-good/egg: egg is a flexible, high-performance e-graph library", "egg is a flexible, high-performance e-graph library - GitHub - egraphs-good/egg: egg is a flexible, high-performance e-graph library", "https://opengraph.githubassets.com/f583a5edadcea6c19dfc54b6e93d2235480606c8ca18ac9d9b1cbaaef0484f06/egraphs-good/egg" ], + "https://github.com/mozilla/sccache": [ + "GitHub - mozilla/sccache: sccache is ccache with cloud storage", + "sccache is ccache with cloud storage. Contribute to mozilla/sccache development by creating an account on GitHub.", + "https://opengraph.githubassets.com/b01339bd81ff2a776c3e5f704c0e86d73f5ae7f17c5c9fb8ae8f7e51a95d8959/mozilla/sccache" + ], "https://github.com/poem-web/poem": [ "GitHub - poem-web/poem: A full-featured and easy-to-use web framework with the Rust programming language.", "A full-featured and easy-to-use web framework with the Rust programming language. 
- GitHub - poem-web/poem: A full-featured and easy-to-use web framework with the Rust programming language.", diff --git a/zine.toml b/zine.toml index d71108c..4e1ff67 100644 --- a/zine.toml +++ b/zine.toml @@ -143,6 +143,23 @@ bio = """ - [GitHub](https://github.com/Boshen) """ +[authors.PsiACE] +name = "PsiACE" +avatar = "/static/avatar/psiace.jpg" +bio = """ +- [GitHub](https://github.com/psiace) +""" + +[authors.databend] +name = "Databend" +avatar = "/static/avatar/databend.svg" +team = true +bio = """ +[Databend](https://github.com/datafuselabs/databend/) is a modern cloud data warehouse focusing on reducing cost and complexity for your massive-scale analytics needs. Open source alternative to Snowflake. + +Also available in the cloud: +""" + [authors.tennyzhuang] name = "tennyzhuang" bio = """