Merge pull request #166 from prphrl/docs
Update docs and add Sphinx configuration key.
JianwuZheng413 authored Jan 14, 2025
2 parents 697f9a2 + e96e593 commit 08a27bc
Showing 7 changed files with 22 additions and 75 deletions.
61 changes: 0 additions & 61 deletions docs/source/cheatsheet/dataset_cheatsheets.rst

This file was deleted.

@@ -15,10 +15,14 @@ GNN Cheatsheet
- The Heterogeneous Graph Attention Network (HAN) model, as introduced in the `"Heterogeneous Graph Attention Network" <https://arxiv.org/abs/1903.07293>`__ paper.
* - OGC
- The OGC method from the `"From Cluster Assumption to Graph Convolution: Graph-based Semi-Supervised Learning Revisited" <https://arxiv.org/abs/2309.13599>`__ paper.
* - HGT
- The Heterogeneous Graph Transformer (HGT) model, as introduced in the `"Heterogeneous Graph Transformer" <https://arxiv.org/abs/2003.01332>`__ paper.
* - SAGE
- The SAGE model, as introduced in the `"Inductive Representation Learning on Large Graphs" <https://arxiv.org/abs/1706.02216>`__ paper.
* - RECT_L
- The RECT model, or more specifically its supervised part RECT-L, from the `"Network Embedding with Completely-imbalanced Labels" <https://arxiv.org/abs/2007.03545>`__ paper.
* - Label-Free-GNN
- Two classical methods (Random and VertexCover) and PS-FeatProp-W from the `"Label-free Node Classification on Graphs with Large Language Models (LLMS)" <https://arxiv.org/abs/2310.04668>`__ paper.
- Two classic methods (Random and VertexCover) and PS-FeatProp-W from the `"Label-free Node Classification on Graphs with Large Language Models (LLMS)" <https://arxiv.org/abs/2310.04668>`__ paper.
* - TAPE
- The TAPE method from the `"Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning" <https://arxiv.org/abs/2305.19523>`__ paper.

@@ -36,6 +40,10 @@ TNN Cheatsheet
- The Tab-Transformer model introduced in the `"TabTransformer: Tabular Data Modeling Using Contextual Embeddings" <https://arxiv.org/abs/2012.06678>`_ paper.
* - TabNet
- The TabNet model introduced in the `"TabNet: Attentive Interpretable Tabular Learning" <https://arxiv.org/abs/1908.07442>`_ paper.
* - Excelformer
- The ExcelFormerConv Layer introduced in the `"ExcelFormer: A neural network surpassing GBDTs on tabular data" <https://arxiv.org/abs/2301.02819>`_ paper.
* - Trompt
- The TromptConv Layer introduced in the `"Trompt: Towards a Better Deep Neural Network for Tabular Data" <https://arxiv.org/abs/2305.18446>`_ paper.


RTL Cheatsheet
7 changes: 3 additions & 4 deletions docs/source/index.rst
@@ -31,12 +31,12 @@ Highlight Features:
:maxdepth: 1
:caption: TUTORIALS

tutorial/transforms
tutorial/convolutions
tutorial/gnns
tutorial/tnns
tutorial/rtls
tutorial/llm_methods
tutorial/convolutions
tutorial/transforms


.. toctree::
@@ -56,8 +56,7 @@ Highlight Features:
:maxdepth: 1
:caption: CHEATSHEETS

cheatsheet/model_cheatsheets
cheatsheet/dataset_cheatsheets
cheatsheet/model_cheatsheet


.. toctree::
6 changes: 3 additions & 3 deletions docs/source/introduce/table_data_handle.rst
@@ -6,8 +6,8 @@ Data Handling of Tables

A table contains many columns, each with its own type. Each column type in Rllm is described by a semantic type, i.e., :obj:`ColType`. Rllm supports two basic column types so far:

- :obj:`ColType.CATEGORICAL`:represent categorical or discrete data, such as grade levels in a student dataset and diabetes types in a diabetes dataset.
- :obj:`ColType.NUMERICAL`:represent numerical or continuous data, such as such as temperature in a weather dataset and income in a salary dataset.
- :obj:`ColType.CATEGORICAL`: represents categorical or discrete data, such as grade levels in a student dataset and diabetes types in a diabetes dataset.
- :obj:`ColType.NUMERICAL`: represents numerical or continuous data, such as temperature in a weather dataset and income in a salary dataset.

A table in Rllm is described by an instance of :class:`~rllm.data.table_data.TableData` with many default attributes:

@@ -157,4 +157,4 @@ Rllm also supports custom datasets, so that you can use Rllm for your own problems
# Set "y" as the target column.
dataset = TableData(df, col_types=col_types, target_col="y")
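The CATEGORICAL/NUMERICAL split described above can be sketched with a small stdlib-only example. Note that :obj:`ColType` here is a stand-in enum and :obj:`infer_col_type` a hypothetical helper for illustration, not the actual rllm API:

```python
from enum import Enum


class ColType(Enum):
    # Stand-in for rllm's ColType, for illustration only.
    CATEGORICAL = "categorical"
    NUMERICAL = "numerical"


def infer_col_type(values):
    """Guess a semantic type: all-numeric values -> NUMERICAL, else CATEGORICAL."""
    numeric = all(
        isinstance(v, (int, float)) and not isinstance(v, bool) for v in values
    )
    return ColType.NUMERICAL if numeric else ColType.CATEGORICAL


columns = {
    "grade": ["A", "B", "A"],               # discrete labels -> CATEGORICAL
    "income": [52000.0, 61500.0, 48200.0],  # continuous values -> NUMERICAL
}
col_types = {name: infer_col_type(vals) for name, vals in columns.items()}
```

A mapping like :obj:`col_types` is what gets passed to the :class:`TableData` constructor above, alongside the raw data frame and the target column name.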
3 changes: 3 additions & 0 deletions readthedocs.yml
@@ -1,5 +1,8 @@
version: 2

sphinx:
configuration: docs/conf.py

build:
os: ubuntu-22.04
tools:
5 changes: 2 additions & 3 deletions rllm/nn/conv/table_conv/ft_transformer_conv.py
@@ -10,9 +10,8 @@


class FTTransformerConv(torch.nn.Module):
r"""The FT-Transformer backbone in the
`"Revisiting Deep Learning Models for Tabular Data"
<https://arxiv.org/abs/2106.11959>`_ paper.
r"""The FT-Transformer backbone in the `"Revisiting Deep Learning
Models for Tabular Data" <https://arxiv.org/abs/2106.11959>`_ paper.
This module concatenates a learnable CLS token embedding :obj:`x_cls` to
the input tensor :obj:`x` and applies a multi-layer Transformer on the
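Shape-wise, the CLS-token step in the docstring above can be illustrated without torch. This is a plain-Python sketch of the concatenation only, not the actual module:

```python
def prepend_cls(x, x_cls):
    """Prepend a CLS row to a 2-D list of per-column embeddings.

    x has shape (num_cols, dim); the result has shape (num_cols + 1, dim),
    which is what the subsequent Transformer layers operate on.
    """
    return [list(x_cls)] + [list(row) for row in x]


x = [[0.1, 0.2], [0.3, 0.4]]   # two column embeddings, dim = 2
x_cls = [0.0, 0.0]             # stand-in for the learnable CLS embedding
out = prepend_cls(x, x_cls)    # 3 rows: CLS first, then the original columns
```

After the Transformer layers run, the output at the CLS position serves as a pooled representation of the whole row.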
5 changes: 2 additions & 3 deletions rllm/nn/conv/table_conv/saint_conv.py
@@ -11,10 +11,9 @@
class SAINTConv(torch.nn.Module):
r"""The SAINTConv Layer introduced in the
`"SAINT: Improved Neural Networks for Tabular Data via Row Attention
and Contrastive Pre-Training"
<https://arxiv.org/abs/2106.01342>`_ paper.
and Contrastive Pre-Training" <https://arxiv.org/abs/2106.01342>`__ paper.
This layer applies two `TransformerEncoder` modules: one for aggregating
This layer applies two :obj:`TransformerEncoder` modules: one for aggregating
information between columns, and another for aggregating information
between samples. This dual attention mechanism allows the model to capture
complex relationships both within the features of a single sample and
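The dual attention described in the docstring alternates between two axes of the batch. The sketch below shows only that alternation; :obj:`mix` is a crude stand-in for a :obj:`TransformerEncoder` call (it averages instead of attending), so this is an illustration of the data flow, not of SAINT itself:

```python
def transpose(m):
    """Swap the (samples, columns) axes of a nested list."""
    return [list(col) for col in zip(*m)]


def mix(rows):
    """Stand-in for one TransformerEncoder: blends values within each row."""
    return [[sum(row) / len(row)] * len(row) for row in rows]


def saint_style_block(x):
    # 1) Inter-column attention: mix across the columns of each sample.
    x = mix(x)
    # 2) Inter-sample attention: transpose so each row holds one column's
    #    values across all samples, mix, then transpose back.
    return transpose(mix(transpose(x)))


batch = [[1.0, 3.0], [5.0, 7.0]]   # 2 samples x 2 columns
out = saint_style_block(batch)
```

The transpose-mix-transpose pattern in step 2 is the key idea: the same row-wise operator is reused across samples simply by flipping the axes.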
