Add scripts to verify anchors in CI (#3128) (#3129)
* add anchor check to 3.1 branch

* fix 11 anchors

* fix an anchor

* Apply suggestions from code review

* fix 2 anchors
yikeke authored Jul 3, 2020
1 parent d4cfc46 commit b3b4f65
Showing 13 changed files with 43 additions and 17 deletions.
17 changes: 15 additions & 2 deletions .circleci/config.yml
@@ -3,15 +3,23 @@ version: 2
jobs:
  lint:
    docker:
-      - image: circleci/ruby:2.4.1-node
+      - image: circleci/node:lts
    working_directory: ~/pingcap/docs
    steps:
      - checkout

+      - run:
+          name: Setup
+          command: |
+            mkdir ~/.npm-global
+            npm config set prefix '~/.npm-global'
+            echo 'export PATH=~/.npm-global/bin:$PATH' >> $BASH_ENV
+            echo 'export NODE_PATH=~/.npm-global/lib/node_modules:$NODE_PATH' >> $BASH_ENV
      - run:
          name: "Install markdownlint"
          command: |
-            sudo npm install -g [email protected]
+            npm install -g [email protected]
      - run:
          name: "Lint README"
@@ -29,6 +37,11 @@ jobs:
          command: |
            scripts/verify-links.sh
+      - run:
+          name: "Check link anchors"
+          command: |
+            scripts/verify-link-anchors.sh
  build:
    docker:
      - image: andelf/doc-build:0.1.9
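To reproduce these CI steps locally before pushing, the sketch below mirrors the new Setup and install steps. It assumes a local Node.js/npm installation, leaves out the exact package version pinned in the config above, and does not cover the collapsed "Lint README" step.

```bash
# Install npm packages into a user-level prefix so that sudo is not needed,
# mirroring the Setup step added above.
mkdir -p ~/.npm-global
npm config set prefix '~/.npm-global'
export PATH=~/.npm-global/bin:$PATH

# Package name assumed to be markdownlint-cli; the CI config pins an exact version.
npm install -g markdownlint-cli

# Run the link and anchor checks from the repository root.
./scripts/verify-links.sh
./scripts/verify-link-anchors.sh
```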
2 changes: 1 addition & 1 deletion auto-random.md
@@ -11,7 +11,7 @@ aliases: ['/docs/v3.1/auto-random/','/docs/v3.1/reference/sql/attributes/auto-ra
>
> `AUTO_RANDOM` is still an experimental feature. It is **NOT** recommended that you use this attribute in the production environment. In later TiDB versions, the syntax or semantics of `AUTO_RANDOM` might change.
-Before using the `AUTO_RANDOM` attribute, set `allow-auto-random = true` in the `experimental` section of the TiDB configuration file. Refer to [`allow-auto-random`](/tidb-configuration-file.md#allow-auto-random) for details.
+Before using the `AUTO_RANDOM` attribute, set `allow-auto-random = true` in the `experimental` section of the TiDB configuration file. Refer to [`allow-auto-random`](/tidb-configuration-file.md#allow-auto-random-new-in-v310) for details.

## User scenario

4 changes: 2 additions & 2 deletions certificate-authentication.md
@@ -267,7 +267,7 @@ First, connect TiDB using the client to configure the login verification. Then,

The user certificate information can be specified by `require subject`, `require issuer`, and `require cipher`, which are used to check the X509 certificate attributes.

-+ `require subject`: Specifies the `subject` information of the client certificate when you log in. With this option specified, you do not need to configure `require ssl` or x509. The information to be specified is consistent with the entered `subject` information in [Generate client keys and certificates](#generate-client-keys-and-certificates).
++ `require subject`: Specifies the `subject` information of the client certificate when you log in. With this option specified, you do not need to configure `require ssl` or x509. The information to be specified is consistent with the entered `subject` information in [Generate client keys and certificates](#generate-client-key-and-certificate).

To get this option, execute the following command:

@@ -483,4 +483,4 @@ Also replace the old CA certificate with the combined certificate so that the cl
sudo openssl x509 -req -in server-req.new.pem -days 365000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.new.pem
```

-3. Configure the TiDB server to use the new server key and certificate. See [Configure TiDB server](#configure-tidb-server) for details.
+3. Configure the TiDB server to use the new server key and certificate. See [Configure TiDB server](#configure-tidb-and-the-client-to-use-certificates) for details.
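Step 3 above now points at the renamed section. As a rough illustration of what that step involves, the TiDB server is pointed at the renewed key and certificate through its `[security]` options and then restarted; the option names and paths below are assumptions for illustration, not taken from this diff.

```bash
# Hypothetical example: paths and file names are placeholders.
cat >> tidb.toml <<'EOF'
[security]
ssl-ca = "/path/to/ca-cert.pem"
ssl-cert = "/path/to/server-cert.new.pem"
ssl-key = "/path/to/server-key.pem"
EOF
# Restart the TiDB server afterwards so it reloads the certificates.
```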
2 changes: 1 addition & 1 deletion glossary.md
@@ -13,7 +13,7 @@ aliases: ['/docs/v3.1/glossary/']

ACID refers to the four key properties of a transaction: atomicity, consistency, isolation, and durability. Each of these properties is described below.

-- **Atomicity** means that either all the changes of an operation are performed, or none of them are. TiDB ensures the atomicity of the [Region](#region) that stores the Primary Key to achieve the atomicity of transactions.
+- **Atomicity** means that either all the changes of an operation are performed, or none of them are. TiDB ensures the atomicity of the [Region](#regionpeerraft-group) that stores the Primary Key to achieve the atomicity of transactions.

- **Consistency** means that transactions always bring the database from one consistent state to another. In TiDB, data consistency is ensured before writing data to the memory.

2 changes: 1 addition & 1 deletion optimistic-transaction.md
@@ -63,7 +63,7 @@ However, TiDB transactions also have the following disadvantages:
* In need of a centralized version manager
* OOM (out of memory) when extensive data is written in the memory

-To avoid potential problems in application, refer to [transaction sizes](/transaction-overview.md#transaction-size) to see more details.
+To avoid potential problems in application, refer to [transaction sizes](/transaction-overview.md#transaction-sizes) to see more details.

## Transaction retries

2 changes: 1 addition & 1 deletion pessimistic-transaction.md
@@ -33,7 +33,7 @@ To disable the pessimistic transaction mode, modify the configuration file and a

## Behaviors

-Pessimistic transactions in TiDB behave similarly to those in MySQL. See the minor differences in [Difference with MySQL InnoDB](#difference-with-mysql-innoDB).
+Pessimistic transactions in TiDB behave similarly to those in MySQL. See the minor differences in [Difference with MySQL InnoDB](#difference-with-mysql-innodb).

- When you perform the `SELECT FOR UPDATE` statement, transactions read the last committed data and apply a pessimistic lock on the data being read.

14 changes: 14 additions & 0 deletions scripts/verify-link-anchors.sh
@@ -0,0 +1,14 @@
#!/bin/bash
#
# In addition to verify-links.sh, this script also checks anchors.
#
# See https://docs.npmjs.com/resolving-eacces-permissions-errors-when-installing-packages-globally if you meet permission problems when executing npm install.

ROOT=$(unset CDPATH && cd $(dirname "${BASH_SOURCE[0]}")/.. && pwd)
cd $ROOT

npm install -g remark-cli remark-lint breeswish/remark-lint-pingcap-docs-anchor

echo "info: checking links anchors under $ROOT directory..."

remark --ignore-path .gitignore -u lint -u remark-lint-pingcap-docs-anchor . --frail --quiet
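A quick way to exercise the new check locally (assuming Node.js is installed and the npm global prefix is writable, as the script header notes) is to run it from the repository root; because of `--frail`, remark exits non-zero as soon as any anchor warning is reported. A single file can also be re-checked with the same plugins:

```bash
# Run the anchor check for the whole repository; a non-zero exit means broken anchors.
./scripts/verify-link-anchors.sh

# Re-check one file after fixing an anchor (same plugins the script installs).
remark -u lint -u remark-lint-pingcap-docs-anchor auto-random.md --frail --quiet
```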
7 changes: 3 additions & 4 deletions scripts/verify-links.sh
@@ -10,13 +10,12 @@
# - When a file is moved, all other references must be updated for now, even if aliases are given
# - This is recommended because of fewer redirects and better anchor support.
#
+# See https://docs.npmjs.com/resolving-eacces-permissions-errors-when-installing-packages-globally if you meet permission problems when executing npm install.

ROOT=$(unset CDPATH && cd $(dirname "${BASH_SOURCE[0]}")/.. && pwd)
cd $ROOT

-if ! which markdown-link-check &>/dev/null; then
-    sudo npm install -g [email protected]
-fi
+npm install -g [email protected]

VERBOSE=${VERBOSE:-}
CONFIG_TMP=$(mktemp)
@@ -50,7 +49,7 @@ fi
while read -r tasks; do
for task in $tasks; do
(
-output=$(markdown-link-check --color --config "$CONFIG_TMP" "$task" -q)
+output=$(markdown-link-check --config "$CONFIG_TMP" "$task" -q)
if [ $? -ne 0 ]; then
printf "$output" >> $ERROR_REPORT
fi
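The dropped `--color` flag only affects terminal colors, not which links are checked. To run the check locally, invoke the script directly; a single file can also be re-checked with the installed tool, though without the script's generated config some repository-absolute links may be reported differently:

```bash
# Full link check, as CI runs it.
./scripts/verify-links.sh

# Re-check a single file after editing it; -q limits output to errors.
markdown-link-check -q auto-random.md
```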
2 changes: 1 addition & 1 deletion sql-statements/sql-statement-recover-table.md
@@ -45,7 +45,7 @@ When you use `RECOVER TABLE` in the upstream TiDB during TiDB Binlog replication

+ Latency occurs during replication between upstream and downstream databases. An error instance: `snapshot is older than GC safe point 2019-07-10 13:45:57 +0800 CST`.

-For the above three situations, you can resume data replication from TiDB Binlog with a [full import of the deleted table](/ecosystem-tool-user-guide.md#full-backup-and-restore-of-tidb-cluster-data-1).
+For the above three situations, you can resume data replication from TiDB Binlog with a [full import of the deleted table](/ecosystem-tool-user-guide.md#backup-and-restore).

## Examples

2 changes: 1 addition & 1 deletion tidb-binlog/handle-tidb-binlog-errors.md
@@ -33,4 +33,4 @@ Solution: Clean up the disk space and then restart Pump.

Cause: When Pump is started, it notifies all Drainer nodes that are in the `online` state. If it fails to notify Drainer, this error log is printed.

-Solution: Use the [binlogctl tool](/tidb-binlog/maintain-tidb-binlog-cluster.md#binlog-guide) to check whether each Drainer node is normal or not. This is to ensure that all Drainer nodes that are in the `online` state are working normally. If the state of a Drainer node is not consistent with its actual working status, use the binlogctl tool to change its state and then restart Pump.
+Solution: Use the [binlogctl tool](/tidb-binlog/maintain-tidb-binlog-cluster.md#binlogctl-guide) to check whether each Drainer node is normal or not. This is to ensure that all Drainer nodes that are in the `online` state are working normally. If the state of a Drainer node is not consistent with its actual working status, use the binlogctl tool to change its state and then restart Pump.
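For context on the corrected link, checking and correcting a Drainer's registered state with binlogctl typically looks like the sketch below; the PD address, node ID, and target state are placeholders, not values from this diff.

```bash
# List Drainer nodes and their registered states (PD address is illustrative).
binlogctl -pd-urls=http://127.0.0.1:2379 -cmd drainers

# If a node's registered state does not match its actual status, update it, then restart Pump.
binlogctl -pd-urls=http://127.0.0.1:2379 -cmd update-drainer -node-id <drainer-node-id> -state paused
```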
2 changes: 1 addition & 1 deletion tidb-lightning/deploy-tidb-lightning.md
@@ -97,7 +97,7 @@ If the data source consists of CSV files, see [CSV support](/tidb-lightning/migr

This section describes two deployment methods of TiDB Lightning:

-- [Deploy TiDB Lightning using TiDB Ansible](#deploy-tidb-lightning-using-ansible)
+- [Deploy TiDB Lightning using TiDB Ansible](#deploy-tidb-lightning-using-tidb-ansible)
- [Deploy TiDB Lightning manually](#deploy-tidb-lightning-manually)

### Deploy TiDB Lightning using TiDB Ansible
2 changes: 1 addition & 1 deletion tidb-lightning/tidb-lightning-faq.md
@@ -61,7 +61,7 @@ If `tikv-importer` needs to be restarted:
4. Start `tikv-importer`.
5. Start `tidb-lightning` *and wait until the program fails with CHECKSUM error, if any*.
* Restarting `tikv-importer` would destroy all engine files still being written, but `tidb-lightning` did not know about it. As of v3.0 the simplest way is to let `tidb-lightning` go on and retry.
-6. [Destroy the failed tables and checkpoints](/troubleshoot-tidb-lightning.md#checkpoint-for-has-invalid-status)
+6. [Destroy the failed tables and checkpoints](/troubleshoot-tidb-lightning.md#checkpoint-for--has-invalid-status-error-code)
7. Start `tidb-lightning` again.
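For step 6, the linked troubleshooting section covers destroying failed checkpoints; a minimal sketch of what that usually involves with TiDB Lightning's control tool is shown below (the flag value and table name are assumptions for illustration, not taken from this diff).

```bash
# Hypothetical example: drop the checkpoint and imported engines for one failed table,
# so the next tidb-lightning run re-imports that table from scratch.
tidb-lightning-ctl --checkpoint-error-destroy='`schema`.`table`'
```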

## How to ensure the integrity of the imported data?
2 changes: 1 addition & 1 deletion tune-operating-system.md
@@ -61,7 +61,7 @@ cpufreq is a module that dynamically adjusts the CPU frequency. It supports five

### NUMA CPU binding

-To avoid accessing memory across Non-Uniform Memory Access (NUMA) nodes as much as possible, you can bind a thread/process to certain CPU cores by setting the CPU affinity of the thread. For ordinary programs, you can use the `numactl` command for the CPU binding. For detailed usage, see the Linux manual pages. For network interface card (NIC) interrupts, see [tune network](#tune-network).
+To avoid accessing memory across Non-Uniform Memory Access (NUMA) nodes as much as possible, you can bind a thread/process to certain CPU cores by setting the CPU affinity of the thread. For ordinary programs, you can use the `numactl` command for the CPU binding. For detailed usage, see the Linux manual pages. For network interface card (NIC) interrupts, see [tune network](#network-tuning).
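As an aside on the paragraph above, binding an ordinary process to a single NUMA node with `numactl` typically looks like the following; the binary and node index are placeholders, not part of this diff.

```bash
# Hypothetical example: confine CPU and memory allocation to NUMA node 0.
numactl --cpunodebind=0 --membind=0 ./bin/tidb-server --config tidb.toml
```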

### Memory—transparent huge page (THP)

