diff --git a/.github/ISSUE_TEMPLATE/BUG_REPORT.md b/.github/ISSUE_TEMPLATE/BUG_REPORT.md index 47b9079aae8..f1be27bf62f 100644 --- a/.github/ISSUE_TEMPLATE/BUG_REPORT.md +++ b/.github/ISSUE_TEMPLATE/BUG_REPORT.md @@ -25,5 +25,5 @@ Please answer the following questions before submitting your issue. Thanks! - [ ] I searched for existing GitHub issues -- [ ] I updated Nebula Graph to most recent version +- [ ] I updated NebulaGraph to the most recent version - [ ] I included all the necessary information above diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index a3c7122b03b..81f0f3171e7 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -1,4 +1,4 @@ - + ### What is changed, added or deleted? (Required) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 8f8c4a8218d..feb8152296d 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,6 +1,6 @@ # Contribute to Documentation -Contributing to the **Nebula Graph** documentation can be a rewarding experience. We welcome your participation to help make the documentation better! +Contributing to the **NebulaGraph** documentation can be a rewarding experience. We welcome your participation to help make the documentation better! ## What to Contribute diff --git a/README.md b/README.md index 3766eb9384b..3b33ce38d67 100644 --- a/README.md +++ b/README.md @@ -1,8 +1,8 @@ -# Nebula Graph documentation +# NebulaGraph documentation - [English](https://docs.nebula-graph.io) - [中文](https://docs.nebula-graph.com.cn/) ## Contributing -If you have any questions on our documentation, feel free to raise an [Issue](https://github.com/vesoft-inc/nebula-docs/issues) or directly create a [Pull Request](https://github.com/vesoft-inc/nebula-docs/pulls) to help fix or update it. See Nebula Graph [CONTRIBUTING](CONTRIBUTING.md) guide to get started.
+If you have any questions on our documentation, feel free to raise an [Issue](https://github.com/vesoft-inc/nebula-docs/issues) or directly create a [Pull Request](https://github.com/vesoft-inc/nebula-docs/pulls) to help fix or update it. See the NebulaGraph [CONTRIBUTING](CONTRIBUTING.md) guide to get started. diff --git a/docs-2.0/1.introduction/0-0-graph.md b/docs-2.0/1.introduction/0-0-graph.md index 5d5c27bbc6d..c6767e07c93 100644 --- a/docs-2.0/1.introduction/0-0-graph.md +++ b/docs-2.0/1.introduction/0-0-graph.md @@ -6,7 +6,7 @@ Graphs are one of the main areas of research in computer science. Graphs can eff ## What are graphs? -Graphs are everywhere. When hearing the word graph, many people think of bar charts or line charts, because sometimes we call them graphs, which show the connections between two or more data systems. The simplest example is the following picture, which shows the number of Nebula Graph GitHub repository stars over time. +Graphs are everywhere. When hearing the word graph, many people think of bar charts or line charts, because sometimes we call them graphs, which show the connections between two or more data systems. The simplest example is the following picture, which shows the number of NebulaGraph GitHub repository stars over time. ![image](https://user-images.githubusercontent.com/42762957/91426247-d3861000-e88e-11ea-8e17-e3d7d7069bd1.png "This is not the graph talked about in this book") diff --git a/docs-2.0/1.introduction/0-1-graph-database.md b/docs-2.0/1.introduction/0-1-graph-database.md index 19e237ecfcd..c041fa18977 100644 --- a/docs-2.0/1.introduction/0-1-graph-database.md +++ b/docs-2.0/1.introduction/0-1-graph-database.md @@ -148,7 +148,7 @@ Cypher has inspired a series of graph query languages, including: [^GSQL]: https://docs.tigergraph.com/dev/gsql-ref -2019, Nebula Graph released Nebula Graph Query Language (nGQL) based on openCypher. +In 2019, NebulaGraph released the NebulaGraph Query Language (nGQL), based on openCypher.
![Image](https://docs-cdn.nebula-graph.com.cn/books/images/langhis.jpg "The history of graph query languages") @@ -237,6 +237,6 @@ Oracle Graph[^Oracle] is a product of the relational database giant Oracle in th [^Oracle]: https://www.oracle.com/database/graph/ -#### Nebula Graph, a new generation of open-source distributed graph databases +#### NebulaGraph, a new generation of open-source distributed graph databases -In the following topics, we will formally introduce Nebula Graph, a new generation of open-source distributed graph databases. +In the following topics, we will formally introduce NebulaGraph, a new generation of open-source distributed graph databases. diff --git a/docs-2.0/1.introduction/0-2.relates.md b/docs-2.0/1.introduction/0-2.relates.md index bc3e1da9ca0..8ba1c72ab7c 100644 --- a/docs-2.0/1.introduction/0-2.relates.md +++ b/docs-2.0/1.introduction/0-2.relates.md @@ -55,7 +55,7 @@ Technically speaking, as a semi-structured unit of information, a document in a #### Graph Store -The last class of NoSQL databases is graph databases. Nebula Graph, is also a graph database. Although graph databases are also NoSQL databases, graph databases are fundamentally different from the above-mentioned NoSQL databases. Graph databases store data in the form of vertices, edges, and properties. Its advantages include high flexibility, support for complex graph algorithms, and can be used to build complex relational graphs. We will discuss graph databases in detail in the subsequent topics. But in this topic, you just need to know that a graph database is a NoSQL type of database. Common graph databases include Nebula Graph, Neo4j, OrientDB, etc. +The last class of NoSQL databases is graph databases. NebulaGraph is also a graph database. Although graph databases are also NoSQL databases, graph databases are fundamentally different from the above-mentioned NoSQL databases. Graph databases store data in the form of vertices, edges, and properties.
Its advantages include high flexibility, support for complex graph algorithms, and the ability to build complex relational graphs. We will discuss graph databases in detail in the subsequent topics. But in this topic, you just need to know that a graph database is a NoSQL type of database. Common graph databases include NebulaGraph, Neo4j, OrientDB, etc. ## Graph-related technologies diff --git a/docs-2.0/1.introduction/1.what-is-nebula-graph.md b/docs-2.0/1.introduction/1.what-is-nebula-graph.md index a327c98fdd6..ed9c0ad4f9e 100644 --- a/docs-2.0/1.introduction/1.what-is-nebula-graph.md +++ b/docs-2.0/1.introduction/1.what-is-nebula-graph.md @@ -1,82 +1,82 @@ -# What is Nebula Graph +# What is NebulaGraph -Nebula Graph is an open-source, distributed, easily scalable, and native graph database. It is capable of hosting graphs with hundreds of billions of vertices and trillions of edges, and serving queries with millisecond-latency. +NebulaGraph is an open-source, distributed, easily scalable, and native graph database. It is capable of hosting graphs with hundreds of billions of vertices and trillions of edges, and serving queries with millisecond latency. -![Nebula Graph birdview](https://docs-cdn.nebula-graph.com.cn/figures/nebula-graph-birdview-3.0.0.png) +![NebulaGraph birdview](https://docs-cdn.nebula-graph.com.cn/figures/nebula-graph-birdview-3.0.0.png) ## What is a graph database -A graph database, such as Nebula Graph, is a database that specializes in storing vast graph networks and retrieving information from them. It efficiently stores data as vertices (nodes) and edges (relationships) in labeled property graphs. Properties can be attached to both vertices and edges. Each vertex can have one or multiple tags (labels). +A graph database, such as NebulaGraph, is a database that specializes in storing vast graph networks and retrieving information from them.
It efficiently stores data as vertices (nodes) and edges (relationships) in labeled property graphs. Properties can be attached to both vertices and edges. Each vertex can have one or multiple tags (labels). ![What is a graph database](https://docs-cdn.nebula-graph.com.cn/docs-2.0/1.introduction/what-is-a-graph-database.png "What is a graph database") Graph databases are well suited for storing most kinds of data models abstracted from reality. Things are connected in almost all fields in the world. Modeling systems like relational databases extract the relationships between entities and squeeze them into table columns alone, with their types and properties stored in other columns or even other tables. This makes data management time-consuming and cost-ineffective. -Nebula Graph, as a typical native graph database, allows you to store the rich relationships as edges with edge types and properties directly attached to them. +NebulaGraph, as a typical native graph database, allows you to store the rich relationships as edges with edge types and properties directly attached to them. -## Advantages of Nebula Graph +## Advantages of NebulaGraph ### Open source -Nebula Graph is open under the Apache 2.0 License. More and more people such as database developers, data scientists, security experts, and algorithm engineers are participating in the designing and development of Nebula Graph. To join the opening of source code and ideas, surf the [Nebula Graph GitHub page](https://github.com/vesoft-inc/nebula-graph). +NebulaGraph is open under the Apache 2.0 License. More and more people, such as database developers, data scientists, security experts, and algorithm engineers, are participating in the design and development of NebulaGraph. To contribute code or ideas, visit the [NebulaGraph GitHub page](https://github.com/vesoft-inc/nebula-graph). ### Outstanding performance -Written in C++ and born for graphs, Nebula Graph handles graph queries in milliseconds.
Among most databases, Nebula Graph shows superior performance in providing graph data services. The larger the data size, the greater the superiority of Nebula Graph. For more information, see [Nebula Graph benchmarking](https://discuss.nebula-graph.io/t/nebula-graph-1-0-benchmark-report/581). +Written in C++ and born for graphs, NebulaGraph handles graph queries in milliseconds. Compared with most databases, NebulaGraph shows superior performance in providing graph data services, and the larger the dataset, the greater its advantage. For more information, see [NebulaGraph benchmarking](https://discuss.nebula-graph.io/t/nebula-graph-1-0-benchmark-report/581). ### High scalability -Nebula Graph is designed in a shared-nothing architecture and supports scaling in and out without interrupting the database service. +NebulaGraph is designed in a shared-nothing architecture and supports scaling in and out without interrupting the database service. ### Developer friendly -Nebula Graph supports clients in popular programming languages like Java, Python, C++, and Go, and more are under development. For more information, see Nebula Graph [clients](../20.appendix/6.eco-tool-version.md). +NebulaGraph supports clients in popular programming languages like Java, Python, C++, and Go, and more are under development. For more information, see the NebulaGraph [clients](../20.appendix/6.eco-tool-version.md). ### Reliable access control -Nebula Graph supports strict role-based access control and external authentication servers such as LDAP (Lightweight Directory Access Protocol) servers to enhance data security. For more information, see [Authentication and authorization](../7.data-security/1.authentication/1.authentication.md). +NebulaGraph supports strict role-based access control and external authentication servers such as LDAP (Lightweight Directory Access Protocol) servers to enhance data security.
For more information, see [Authentication and authorization](../7.data-security/1.authentication/1.authentication.md). ### Diversified ecosystem -More and more native tools of Nebula Graph have been released, such as [Nebula Studio](https://github.com/vesoft-inc/nebula-web-docker), [Nebula Console](https://github.com/vesoft-inc/nebula-console), and [Nebula Exchange](https://github.com/vesoft-inc/nebula-exchange). For more ecosystem tools, see [Ecosystem tools overview](../20.appendix/6.eco-tool-version.md). +More and more native tools of NebulaGraph have been released, such as [Nebula Studio](https://github.com/vesoft-inc/nebula-web-docker), [Nebula Console](https://github.com/vesoft-inc/nebula-console), and [Nebula Exchange](https://github.com/vesoft-inc/nebula-exchange). For more ecosystem tools, see [Ecosystem tools overview](../20.appendix/6.eco-tool-version.md). -Besides, Nebula Graph has the ability to be integrated with many cutting-edge technologies, such as Spark, Flink, and HBase, for the purpose of mutual strengthening in a world of increasing challenges and chances. +Besides, NebulaGraph has the ability to be integrated with many cutting-edge technologies, such as Spark, Flink, and HBase, for the purpose of mutual strengthening in a world of increasing challenges and chances. ### OpenCypher-compatible query language -The native Nebula Graph Query Language, also known as nGQL, is a declarative, openCypher-compatible textual query language. It is easy to understand and easy to use. For more information, see [nGQL guide](../3.ngql-guide/1.nGQL-overview/1.overview.md). +The native NebulaGraph Query Language, also known as nGQL, is a declarative, openCypher-compatible textual query language. It is easy to understand and easy to use. For more information, see [nGQL guide](../3.ngql-guide/1.nGQL-overview/1.overview.md). 
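As a sketch of what nGQL looks like, the statements below create a tiny hypothetical graph space and run one traversal. The space, tag, and edge type names (`demo`, `player`, `follow`) are illustrative examples, not part of any shipped dataset:

```ngql
# Illustrative only: a hypothetical space with one tag and one edge type.
CREATE SPACE IF NOT EXISTS demo (vid_type = FIXED_STRING(32));
# In a real cluster, wait for the schema change to sync before using the space.
USE demo;
CREATE TAG IF NOT EXISTS player(name string, age int);
CREATE EDGE IF NOT EXISTS follow(degree int);
INSERT VERTEX player(name, age) VALUES "player100":("Tim Duncan", 42);
INSERT VERTEX player(name, age) VALUES "player101":("Tony Parker", 36);
INSERT EDGE follow(degree) VALUES "player100"->"player101":(95);
# Traverse one hop along the follow edge type and return the destination's name.
GO FROM "player100" OVER follow YIELD properties($$).name;
```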
### Future-oriented hardware with balanced reading and writing -Solid-state drives have extremely high performance and [they are getting cheaper](https://blocksandfiles.com/wp-content/uploads/2021/01/Wikibon-SSD-less-than-HDD-in-2026.jpg). Nebula Graph is a product based on SSD. Compared with products based on HDD and large memory, it is more suitable for future hardware trends and easier to achieve balanced reading and writing. +Solid-state drives have extremely high performance and [they are getting cheaper](https://blocksandfiles.com/wp-content/uploads/2021/01/Wikibon-SSD-less-than-HDD-in-2026.jpg). NebulaGraph is a product based on SSD. Compared with products based on HDD and large memory, it is more suitable for future hardware trends and easier to achieve balanced reading and writing. ### Easy data modeling and high flexibility -You can easily model the connected data into Nebula Graph for your business without forcing them into a structure such as a relational table, and properties can be added, updated, and deleted freely. For more information, see [Data modeling](2.data-model.md). +You can easily model the connected data into NebulaGraph for your business without forcing them into a structure such as a relational table, and properties can be added, updated, and deleted freely. For more information, see [Data modeling](2.data-model.md). ### High popularity -Nebula Graph is being used by tech leaders such as Tencent, Vivo, Meituan, and JD Digits. For more information, visit the [Nebula Graph official website](https://nebula-graph.io/). +NebulaGraph is being used by tech leaders such as Tencent, Vivo, Meituan, and JD Digits. For more information, visit the [NebulaGraph official website](https://nebula-graph.io/). ## Use cases -Nebula Graph can be used to support various graph-based scenarios. To spare the time spent on pushing the kinds of data mentioned in this section into relational databases and on bothering with join queries, use Nebula Graph. 
+NebulaGraph supports a wide range of graph-based scenarios. Instead of spending time forcing the kinds of data described in this section into relational tables and wrestling with join queries, you can store and query them naturally with NebulaGraph. ### Fraud detection -Financial institutions have to traverse countless transactions to piece together potential crimes and understand how combinations of transactions and devices might be related to a single fraud scheme. This kind of scenario can be modeled in graphs, and with the help of Nebula Graph, fraud rings and other sophisticated scams can be easily detected. +Financial institutions have to traverse countless transactions to piece together potential crimes and understand how combinations of transactions and devices might be related to a single fraud scheme. This kind of scenario can be modeled in graphs, and with the help of NebulaGraph, fraud rings and other sophisticated scams can be easily detected. ### Real-time recommendation -Nebula Graph offers the ability to instantly process the real-time information produced by a visitor and make accurate recommendations on articles, videos, products, and services. +NebulaGraph offers the ability to instantly process the real-time information produced by a visitor and make accurate recommendations on articles, videos, products, and services. ### Intelligent question-answer system -Natural languages can be transformed into knowledge graphs and stored in Nebula Graph. A question organized in a natural language can be resolved by a semantic parser in an intelligent question-answer system and re-organized. Then, possible answers to the question can be retrieved from the knowledge graph and provided to the one who asked the question. +Natural languages can be transformed into knowledge graphs and stored in NebulaGraph. A question organized in a natural language can be resolved by a semantic parser in an intelligent question-answer system and re-organized.
Then, possible answers to the question can be retrieved from the knowledge graph and provided to the one who asked the question. ### Social networking -Information on people and their relationships is typical graph data. Nebula Graph can easily handle the social networking information of billions of people and trillions of relationships, and provide lightning-fast queries for friend recommendations and job promotions in the case of massive concurrency. +Information on people and their relationships is typical graph data. NebulaGraph can easily handle the social networking information of billions of people and trillions of relationships, and provide lightning-fast queries for friend recommendations and job promotions in the case of massive concurrency. ## Related links diff --git a/docs-2.0/1.introduction/2.data-model.md b/docs-2.0/1.introduction/2.data-model.md index 59727ac5d06..6ac70153ba2 100644 --- a/docs-2.0/1.introduction/2.data-model.md +++ b/docs-2.0/1.introduction/2.data-model.md @@ -1,21 +1,21 @@ # Data modeling -A data model is a model that organizes data and specifies how they are related to one another. This topic describes the Nebula Graph data model and provides suggestions for data modeling with Nebula Graph. +A data model is a model that organizes data and specifies how they are related to one another. This topic describes the NebulaGraph data model and provides suggestions for data modeling with NebulaGraph. ## Data structures -Nebula Graph data model uses six data structures to store data. They are graph spaces, vertices, edges, tags, edge types and properties. +The NebulaGraph data model uses six data structures to store data: graph spaces, vertices, edges, tags, edge types, and properties. - **Graph spaces**: Graph spaces are used to isolate data from different teams or programs. Data stored in different graph spaces are securely isolated. Storage replications, privileges, and partitions can be assigned.
- **Vertices**: Vertices are used to store entities. -- In Nebula Graph, vertices are identified with vertex identifiers (i.e. `VID`). The `VID` must be unique in the same graph space. VID should be int64, or fixed_string(N). +- In NebulaGraph, vertices are identified by vertex identifiers (i.e. `VID`). A `VID` must be unique within its graph space and must be of type int64 or fixed_string(N). - A vertex has zero to multiple tags. !!! compatibility - In Nebula Graph 2.x a vertex must have at least one tag. And in Nebula Graph {{nebula.release}}, a tag is not required for a vertex. + In NebulaGraph 2.x, a vertex must have at least one tag. In NebulaGraph {{nebula.release}}, a tag is not required for a vertex. - **Edges**: Edges are used to connect vertices. An edge is a connection or behavior between two vertices. - There can be multiple edges between two vertices. @@ -36,7 +36,7 @@ Nebula Graph data model uses six data structures to store data. They are graph s ## Directed property graph -Nebula Graph stores data in directed property graphs. A directed property graph has a set of vertices connected by directed edges. Both vertices and edges can have properties. A directed property graph is represented as: +NebulaGraph stores data in directed property graphs. A directed property graph has a set of vertices connected by directed edges. Both vertices and edges can have properties. A directed property graph is represented as: **G = < V, E, PV, PE >** @@ -56,10 +56,10 @@ The following table is an example of the structure of the basketball player data !!! Note - Nebula Graph supports only directed edges. + NebulaGraph supports only directed edges. !!! compatibility - Nebula Graph {{ nebula.release }} allows dangling edges. Therefore, when adding or deleting, you need to ensure the corresponding source vertex and destination vertex of an edge exist.
For details, see [INSERT VERTEX](../3.ngql-guide/12.vertex-statements/1.insert-vertex.md), [DELETE VERTEX](../3.ngql-guide/12.vertex-statements/4.delete-vertex.md), [INSERT EDGE](../3.ngql-guide/13.edge-statements/1.insert-edge.md), and [DELETE EDGE](../3.ngql-guide/13.edge-statements/4.delete-edge.md). + NebulaGraph {{ nebula.release }} allows dangling edges. Therefore, when adding or deleting, you need to ensure the corresponding source vertex and destination vertex of an edge exist. For details, see [INSERT VERTEX](../3.ngql-guide/12.vertex-statements/1.insert-vertex.md), [DELETE VERTEX](../3.ngql-guide/12.vertex-statements/4.delete-vertex.md), [INSERT EDGE](../3.ngql-guide/13.edge-statements/1.insert-edge.md), and [DELETE EDGE](../3.ngql-guide/13.edge-statements/4.delete-edge.md). The MERGE statement in openCypher is not supported. diff --git a/docs-2.0/1.introduction/3.nebula-graph-architecture/1.architecture-overview.md b/docs-2.0/1.introduction/3.nebula-graph-architecture/1.architecture-overview.md index 8e7d1109cab..e45685d0295 100644 --- a/docs-2.0/1.introduction/3.nebula-graph-architecture/1.architecture-overview.md +++ b/docs-2.0/1.introduction/3.nebula-graph-architecture/1.architecture-overview.md @@ -1,22 +1,22 @@ # Architecture overview -Nebula Graph consists of three services: the Graph Service, the Storage Service, and the Meta Service. It applies the separation of storage and computing architecture. +NebulaGraph consists of three services: the Graph Service, the Storage Service, and the Meta Service. It applies the separation of storage and computing architecture. -Each service has its executable binaries and processes launched from the binaries. Users can deploy a Nebula Graph cluster on a single machine or multiple machines using these binaries. +Each service has its executable binaries and processes launched from the binaries. Users can deploy a NebulaGraph cluster on a single machine or multiple machines using these binaries. 
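The directed property graph G = < V, E, PV, PE > described in the data-model section can be sketched as a minimal in-memory model. This is an illustration of the concepts (VIDs, tags, edge types, ranks, properties), not NebulaGraph's actual storage format; the `player`/`follow` names echo the basketball player example:

```python
from dataclasses import dataclass, field

# Minimal illustrative model of a directed property graph G = <V, E, PV, PE>.
# A sketch of the concepts only; NebulaGraph encodes these quite differently.

@dataclass
class Vertex:
    vid: str                                    # must be unique within a graph space
    tags: dict = field(default_factory=dict)    # tag name -> vertex properties (PV)

@dataclass
class Edge:
    src: str                                    # source vertex VID
    dst: str                                    # destination vertex VID
    edge_type: str
    rank: int = 0                               # distinguishes parallel edges
    props: dict = field(default_factory=dict)   # edge properties (PE)

# V: vertices keyed by VID
V = {
    "player100": Vertex("player100", {"player": {"name": "Tim Duncan", "age": 42}}),
    "player101": Vertex("player101", {"player": {"name": "Tony Parker", "age": 36}}),
}
# E: directed edges; multiple edges may connect the same pair of vertices
E = [Edge("player100", "player101", "follow", props={"degree": 95})]

# Outgoing edges of a vertex, as a GO-style traversal would read them
out_edges = [e for e in E if e.src == "player100"]
print([e.dst for e in out_edges])  # ['player101']
```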
-The following figure shows the architecture of a typical Nebula Graph cluster. +The following figure shows the architecture of a typical NebulaGraph cluster. -![Nebula Graph architecture](https://docs-cdn.nebula-graph.com.cn/figures/nebula-graph-architecture_3.png "Nebula Graph architecture") +![NebulaGraph architecture](https://docs-cdn.nebula-graph.com.cn/figures/nebula-graph-architecture_3.png "NebulaGraph architecture") ## The Meta Service -The Meta Service in the Nebula Graph architecture is run by the nebula-metad processes. It is responsible for metadata management, such as schema operations, cluster administration, and user privilege management. +The Meta Service in the NebulaGraph architecture is run by the nebula-metad processes. It is responsible for metadata management, such as schema operations, cluster administration, and user privilege management. For details on the Meta Service, see [Meta Service](2.meta-service.md). ## The Graph Service and the Storage Service -Nebula Graph applies the separation of storage and computing architecture. The Graph Service is responsible for querying. The Storage Service is responsible for storage. They are run by different processes, i.e., nebula-graphd and nebula-storaged. The benefits of the separation of storage and computing architecture are as follows: +NebulaGraph applies the separation of storage and computing architecture. The Graph Service is responsible for querying. The Storage Service is responsible for storage. They are run by different processes, i.e., nebula-graphd and nebula-storaged. The benefits of the separation of storage and computing architecture are as follows: * Great scalability @@ -30,7 +30,7 @@ Nebula Graph applies the separation of storage and computing architecture. The G The separation of storage and computing architecture provides a higher resource utilization rate, and it enables clients to manage the cost flexibly according to business demands. 
- + * Open to more possibilities diff --git a/docs-2.0/1.introduction/3.nebula-graph-architecture/2.meta-service.md b/docs-2.0/1.introduction/3.nebula-graph-architecture/2.meta-service.md index 43dc21b5894..8cfce5a8179 100644 --- a/docs-2.0/1.introduction/3.nebula-graph-architecture/2.meta-service.md +++ b/docs-2.0/1.introduction/3.nebula-graph-architecture/2.meta-service.md @@ -15,7 +15,7 @@ The Meta Service is run by nebula-metad processes. Users can deploy nebula-metad All the nebula-metad processes form a Raft-based cluster, with one process as the leader and the others as the followers. -The leader is elected by the majorities and only the leader can provide service to the clients or other components of Nebula Graph. The followers will be run in a standby way and each has a data replication of the leader. Once the leader fails, one of the followers will be elected as the new leader. +The leader is elected by majority vote, and only the leader can provide service to the clients or other components of NebulaGraph. The followers run in standby mode, and each keeps a replica of the leader's data. Once the leader fails, one of the followers is elected as the new leader. !!! Note @@ -27,7 +27,7 @@ The leader is elected by the majorities and only the leader can provide service The Meta Service stores the information of user accounts and the privileges granted to the accounts. When the clients send queries to the Meta Service through an account, the Meta Service checks the account information and whether the account has the right privileges to execute the queries or not. -For more information on Nebula Graph access control, see [Authentication](../../7.data-security/1.authentication/1.authentication.md). +For more information on NebulaGraph access control, see [Authentication](../../7.data-security/1.authentication/1.authentication.md).
### Manages partitions @@ -35,15 +35,15 @@ The Meta Service stores and manages the locations of the storage partitions and ### Manages graph spaces -Nebula Graph supports multiple graph spaces. Data stored in different graph spaces are securely isolated. The Meta Service stores the metadata of all graph spaces and tracks the changes of them, such as adding or dropping a graph space. +NebulaGraph supports multiple graph spaces. Data stored in different graph spaces are securely isolated. The Meta Service stores the metadata of all graph spaces and tracks the changes of them, such as adding or dropping a graph space. ### Manages schema information -Nebula Graph is a strong-typed graph database. Its schema contains tags (i.e., the vertex types), edge types, tag properties, and edge type properties. +NebulaGraph is a strong-typed graph database. Its schema contains tags (i.e., the vertex types), edge types, tag properties, and edge type properties. The Meta Service stores the schema information. Besides, it performs the addition, modification, and deletion of the schema, and logs the versions of them. -For more information on Nebula Graph schema, see [Data model](../2.data-model.md). +For more information on NebulaGraph schema, see [Data model](../2.data-model.md). ### Manages TTL information diff --git a/docs-2.0/1.introduction/3.nebula-graph-architecture/4.storage-service.md b/docs-2.0/1.introduction/3.nebula-graph-architecture/4.storage-service.md index c4d6bc7940c..726293d6c02 100644 --- a/docs-2.0/1.introduction/3.nebula-graph-architecture/4.storage-service.md +++ b/docs-2.0/1.introduction/3.nebula-graph-architecture/4.storage-service.md @@ -1,6 +1,6 @@ # Storage Service -The persistent data of Nebula Graph have two parts. One is the [Meta Service](2.meta-service.md) that stores the meta-related data. +The persistent data of NebulaGraph have two parts. One is the [Meta Service](2.meta-service.md) that stores the meta-related data. 
The other is the Storage Service that stores the data, which is run by the nebula-storaged process. This topic will describe the architecture of the Storage Service. @@ -52,37 +52,37 @@ The following will describe some features of the Storage Service based on the ab ## KVStore -Nebula Graph develops and customizes its built-in KVStore for the following reasons. +NebulaGraph develops and customizes its built-in KVStore for the following reasons. - It is a high-performance KVStore. -- It is provided as a (kv) library and can be easily developed for the filter pushdown purpose. As a strong-typed database, how to provide Schema during pushdown is the key to efficiency for Nebula Graph. +- It is provided as a (kv) library and can be easily developed for the filter pushdown purpose. As a strong-typed database, how to provide Schema during pushdown is the key to efficiency for NebulaGraph. - It has strong data consistency. -Therefore, Nebula Graph develops its own KVStore with RocksDB as the local storage engine. The advantages are as follows. +Therefore, NebulaGraph develops its own KVStore with RocksDB as the local storage engine. The advantages are as follows. -- For multiple local hard disks, Nebula Graph can make full use of its concurrent capacities through deploying multiple data directories. +- For multiple local hard disks, NebulaGraph can make full use of its concurrent capacities through deploying multiple data directories. - The Meta Service manages all the Storage servers. All the partition distribution data and current machine status can be found in the meta service. Accordingly, users can execute a manual load balancing plan in meta service. !!! Note - Nebula Graph does not support auto load balancing because auto data transfer will affect online business. + NebulaGraph does not support auto load balancing because auto data transfer will affect online business. -- Nebula Graph provides its own WAL mode so one can customize the WAL. 
Each partition owns its WAL. +- NebulaGraph provides its own WAL mode so one can customize the WAL. Each partition owns its WAL. -- One Nebula Graph KVStore cluster supports multiple graph spaces, and each graph space has its own partition number and replica copies. Different graph spaces are isolated physically from each other in the same cluster. +- One NebulaGraph KVStore cluster supports multiple graph spaces, and each graph space has its own partition number and replica copies. Different graph spaces are isolated physically from each other in the same cluster. ## Data storage structure -Graphs consist of vertices and edges. Nebula Graph uses key-value pairs to store vertices, edges, and their properties. Vertices and edges are stored in keys and their properties are stored in values. Such structure enables efficient property filtering. +Graphs consist of vertices and edges. NebulaGraph uses key-value pairs to store vertices, edges, and their properties. Vertices and edges are stored in keys and their properties are stored in values. Such structure enables efficient property filtering. - The storage structure of vertices - Different from Nebula Graph version 2.x, version 3.x added a new key for each vertex. Compared to the old key that still exists, the new key has no `TagID` field and no value. Vertices in Nebula Graph can now live without tags owing to the new key. + Different from NebulaGraph version 2.x, version 3.x added a new key for each vertex. Compared to the old key that still exists, the new key has no `TagID` field and no value. Vertices in NebulaGraph can now live without tags owing to the new key. - ![The vertex structure of Nebula Graph](https://docs-cdn.nebula-graph.com.cn/figures/3.0-vertex-key.png) + ![The vertex structure of NebulaGraph](https://docs-cdn.nebula-graph.com.cn/figures/3.0-vertex-key.png) |Field|Description| |:---|:---| @@ -94,7 +94,7 @@ Graphs consist of vertices and edges. 
Nebula Graph uses key-value pairs to store - The storage structure of edges - ![The edge structure of Nebula Graph](https://docs-cdn.nebula-graph.com.cn/figures/3.0-edge-key.png) + ![The edge structure of NebulaGraph](https://docs-cdn.nebula-graph.com.cn/figures/3.0-edge-key.png) |Field|Description| |:---|:---| @@ -108,19 +108,19 @@ Graphs consist of vertices and edges. Nebula Graph uses key-value pairs to store ### Property descriptions -Nebula Graph uses strong-typed Schema. +NebulaGraph uses strong-typed Schema. -Nebula Graph will store the properties of vertex and edges in order after encoding them. Since the length of properties is fixed, queries can be made in no time according to offset. Before decoding, Nebula Graph needs to get (and cache) the schema information in the Meta Service. In addition, when encoding properties, Nebula Graph will add the corresponding schema version to support online schema change. +NebulaGraph will store the properties of vertex and edges in order after encoding them. Since the length of properties is fixed, queries can be made in no time according to offset. Before decoding, NebulaGraph needs to get (and cache) the schema information in the Meta Service. In addition, when encoding properties, NebulaGraph will add the corresponding schema version to support online schema change. ## Data partitioning -Since in an ultra-large-scale relational network, vertices can be as many as tens to hundreds of billions, and edges are even more than trillions. Even if only vertices and edges are stored, the storage capacity of both exceeds that of ordinary servers. Therefore, Nebula Graph uses hash to shard the graph elements and store them in different partitions. +Since in an ultra-large-scale relational network, vertices can be as many as tens to hundreds of billions, and edges are even more than trillions. Even if only vertices and edges are stored, the storage capacity of both exceeds that of ordinary servers. 
Therefore, NebulaGraph uses hash to shard the graph elements and store them in different partitions. ![data partitioning](https://www-cdn.nebula-graph.com.cn/nebula-blog/DataModel02.png) ### Edge partitioning and storage amplification -In Nebula Graph, an edge corresponds to two key-value pairs on the hard disk. When there are lots of edges and each has many properties, storage amplification will be obvious. The storage format of edges is shown in the figure below. +In NebulaGraph, an edge corresponds to two key-value pairs on the hard disk. When there are lots of edges and each has many properties, storage amplification will be obvious. The storage format of edges is shown in the figure below. ![partitioning by edge](https://docs-cdn.nebula-graph.com.cn/figures/edge-division.png) @@ -136,19 +136,19 @@ In this example, ScrVertex connects DstVertex via EdgeA, forming the path of `(S EdgeA_Out and EdgeA_In are stored in storage layer with opposite directions, constituting EdgeA logically. EdgeA_Out is used for traversal requests starting from SrcVertex, such as `(a)-[]->()`; EdgeA_In is used for traversal requests starting from DstVertex, such as `()-[]->(a)`. -Like EdgeA_Out and EdgeA_In, Nebula Graph redundantly stores the information of each edge, which doubles the actual capacities needed for edge storage. The key corresponding to the edge occupies a small hard disk space, but the space occupied by Value is proportional to the length and amount of the property value. Therefore, it will occupy a relatively large hard disk space if the property value of the edge is large or there are many edge property values. +Like EdgeA_Out and EdgeA_In, NebulaGraph redundantly stores the information of each edge, which doubles the actual capacities needed for edge storage. The key corresponding to the edge occupies a small hard disk space, but the space occupied by Value is proportional to the length and amount of the property value. 
Therefore, it will occupy a relatively large hard disk space if the property value of the edge is large or there are many edge property values. To ensure the final consistency of the two key-value pairs when operating on edges, enable the [TOSS function](../../5.configurations-and-logs/1.configurations/3.graph-config.md ). After that, the operation will be performed in Partition x first where the out-edge is located, and then in Partition y where the in-edge is located. Finally, the result is returned. --> ### Partition algorithm -Nebula Graph uses a **static Hash** strategy to shard data through a modulo operation on vertex ID. All the out-keys, in-keys, and tag data will be placed in the same partition. In this way, query efficiency is increased dramatically. +NebulaGraph uses a **static Hash** strategy to shard data through a modulo operation on vertex ID. All the out-keys, in-keys, and tag data will be placed in the same partition. In this way, query efficiency is increased dramatically. !!! Note The number of partitions needs to be determined when users are creating a graph space since it cannot be changed afterward. Users are supposed to take into consideration the demands of future business when setting it. -When inserting into Nebula Graph, vertices and edges are distributed across different partitions. And the partitions are located on different machines. The number of partitions is set in the CREATE SPACE statement and cannot be changed afterward. +When inserting into NebulaGraph, vertices and edges are distributed across different partitions. And the partitions are located on different machines. The number of partitions is set in the CREATE SPACE statement and cannot be changed afterward. If certain vertices need to be placed on the same partition (i.e., on the same machine), see [Formula/code](https://github.com/vesoft-inc/nebula-common/blob/master/src/common/clients/meta/MetaClient.cpp). 
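The static-hash sharding described in this hunk can be sketched in a few lines. This is an illustrative model only, not the real formula (which lives in the `MetaClient.cpp` code linked above): the CRC32 stand-in hash and the 1-based partition numbering are assumptions of this sketch.

```python
import zlib

def partition_id(vid: str, partition_num: int) -> int:
    # Static hash strategy: hash the vertex ID, then take the modulo of
    # the partition count. CRC32 stands in for NebulaGraph's real hash
    # function; numbering partitions from 1 is an assumption here.
    return zlib.crc32(vid.encode("utf-8")) % partition_num + 1

# All keys that lead with the same vertex ID (out-keys, in-keys, tag
# data) hash identically, so they land in the same partition.
assert partition_id("player100", 100) == partition_id("player100", 100)
```

Because the hash is static, the partition count is baked into data placement, which is one way to see why it cannot be changed after the graph space is created.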
@@ -203,14 +203,14 @@ Failure: Scenario 1: Take a (space) cluster of a single replica as an example. I Raft and HDFS have different modes of duplication. Raft is based on a quorum vote, so the number of replicas cannot be even. ### Multi Group Raft -The Storage Service supports a distributed cluster architecture, so Nebula Graph implements Multi Group Raft according to Raft protocol. Each Raft group stores all the replicas of each partition. One replica is the leader, while others are followers. In this way, Nebula Graph achieves strong consistency and high availability. The functions of Raft are as follows. +The Storage Service supports a distributed cluster architecture, so NebulaGraph implements Multi Group Raft according to Raft protocol. Each Raft group stores all the replicas of each partition. One replica is the leader, while others are followers. In this way, NebulaGraph achieves strong consistency and high availability. The functions of Raft are as follows. -Nebula Graph uses Multi Group Raft to improve performance when there are many partitions because Raft-wal cannot be NULL. When there are too many partitions, costs will increase, such as storing information in Raft group, WAL files, or batch operation in low load. +NebulaGraph uses Multi Group Raft to improve performance when there are many partitions because Raft-wal cannot be NULL. When there are too many partitions, costs will increase, such as storing information in Raft group, WAL files, or batch operation in low load. There are two key points to implement the Multi Raft Group: @@ -224,7 +224,7 @@ There are two key points to implement the Multi Raft Group: ### Batch -For each partition, it is necessary to do a batch to improve throughput when writing the WAL serially. As Nebula Graph uses WAL to implement some special functions, batches need to be grouped, which is a feature of Nebula Graph. +For each partition, it is necessary to do a batch to improve throughput when writing the WAL serially. 
As NebulaGraph uses WAL to implement some special functions, batches need to be grouped, which is a feature of NebulaGraph. For example, lock-free CAS operations will execute after all the previous WALs are committed. So for a batch, if there are several WALs in CAS type, we need to divide this batch into several smaller groups and make sure they are committed serially. @@ -240,17 +240,17 @@ Raft listener can write the data into Elasticsearch cluster after receiving them ### Transfer Leadership -Transfer leadership is extremely important for balance. When moving a partition from one machine to another, Nebula Graph first checks if the source is a leader. If so, it should be moved to another peer. After data migration is completed, it is important to [balance leader distribution](../../8.service-tuning/load-balance.md) again. +Transfer leadership is extremely important for balance. When moving a partition from one machine to another, NebulaGraph first checks if the source is a leader. If so, it should be moved to another peer. After data migration is completed, it is important to [balance leader distribution](../../8.service-tuning/load-balance.md) again. When a transfer leadership command is committed, the leader will abandon its leadership and the followers will start a leader election. ### Peer changes -To avoid split-brain, when members in a Raft Group change, an intermediate state is required. In such a state, the quorum of the old group and new group always have an overlap. Thus it prevents the old or new group from making decisions unilaterally. To make it even simpler, in his doctoral thesis Diego Ongaro suggests adding or removing a peer once to ensure the overlap between the quorum of the new group and the old group. Nebula Graph also uses this approach, except that the way to add or remove a member is different. For details, please refer to addPeer/removePeer in the Raft Part class. 
+To avoid split-brain, when members in a Raft Group change, an intermediate state is required. In such a state, the quorum of the old group and new group always have an overlap. Thus it prevents the old or new group from making decisions unilaterally. To make it even simpler, in his doctoral thesis Diego Ongaro suggests adding or removing a peer once to ensure the overlap between the quorum of the new group and the old group. NebulaGraph also uses this approach, except that the way to add or remove a member is different. For details, please refer to addPeer/removePeer in the Raft Part class. ## Cache -The cache management of RocksDB can not cache vertices or edges on demand. Nebula Graph implements its own cache management for Storage, allowing you to set the storage cache size, content, etc. For more information, see [Storage cache configurations](../../5.configurations-and-logs/1.configurations/4.storage-config.md). +The cache management of RocksDB can not cache vertices or edges on demand. NebulaGraph implements its own cache management for Storage, allowing you to set the storage cache size, content, etc. For more information, see [Storage cache configurations](../../5.configurations-and-logs/1.configurations/4.storage-config.md). ## Differences with HDFS diff --git a/docs-2.0/1.introduction/3.vid.md b/docs-2.0/1.introduction/3.vid.md index 2b676a0ab17..2c44a7dc076 100644 --- a/docs-2.0/1.introduction/3.vid.md +++ b/docs-2.0/1.introduction/3.vid.md @@ -1,6 +1,6 @@ # VID -In Nebula Graph, a vertex is uniquely identified by its ID, which is called a VID or a Vertex ID. +In NebulaGraph, a vertex is uniquely identified by its ID, which is called a VID or a Vertex ID. ## Features @@ -8,7 +8,7 @@ In Nebula Graph, a vertex is uniquely identified by its ID, which is called a VI - A VID in a graph space is unique. It functions just as a primary key in a relational database. VIDs in different graph spaces are independent. 
-- The VID generation method must be set by users, because Nebula Graph does not provide auto increasing ID, or UUID. +- The VID generation method must be set by users, because NebulaGraph does not provide auto-increasing IDs or UUIDs. - Vertices with the same VID will be identified as the same one. For example: @@ -22,7 +22,7 @@ In Nebula Graph, a vertex is uniquely identified by its ID, which is called a VI ## VID Operation -- Nebula Graph 1.x only supports `INT64` while Nebula Graph 2.x supports `INT64` and `FIXED_STRING(<N>)`. In `CREATE SPACE`, VID types can be set via `vid_type`. +- NebulaGraph 1.x only supports `INT64` while NebulaGraph 2.x supports `INT64` and `FIXED_STRING(<N>)`. In `CREATE SPACE`, VID types can be set via `vid_type`. - `id()` function can be used to specify or locate a VID. @@ -52,7 +52,7 @@ A VID is set when you [insert a vertex](../3.ngql-guide/12.vertex-statements/1.i ## Query `start vid` and global scan -In most cases, the execution plan of query statements in Nebula Graph (`MATCH`, `GO`, and `LOOKUP`) must query the `start vid` in a certain way. +In most cases, the execution plan of query statements in NebulaGraph (`MATCH`, `GO`, and `LOOKUP`) must query the `start vid` in a certain way. There are only two ways to locate `start vid`: diff --git a/docs-2.0/14.client/1.nebula-client.md b/docs-2.0/14.client/1.nebula-client.md index 895fbdad4b6..000a48c0a2a 100644 --- a/docs-2.0/14.client/1.nebula-client.md +++ b/docs-2.0/14.client/1.nebula-client.md @@ -1,16 +1,16 @@ # Clients overview -Nebula Graph supports multiple types of clients for users to connect to and manage the Nebula Graph database. +NebulaGraph supports multiple types of clients for users to connect to and manage the NebulaGraph database.
- [Nebula Console](../nebula-console.md): the native CLI client -- [Nebula CPP](3.nebula-cpp-client.md): the Nebula Graph client for C++ +- [Nebula CPP](3.nebula-cpp-client.md): the NebulaGraph client for C++ -- [Nebula Java](4.nebula-java-client.md): the Nebula Graph client for Java +- [Nebula Java](4.nebula-java-client.md): the NebulaGraph client for Java -- [Nebula Python](5.nebula-python-client.md): the Nebula Graph client for Python +- [Nebula Python](5.nebula-python-client.md): the NebulaGraph client for Python -- [Nebula Go](6.nebula-go-client.md): the Nebula Graph client for Golang +- [Nebula Go](6.nebula-go-client.md): the NebulaGraph client for Golang !!! note @@ -18,4 +18,4 @@ Nebula Graph supports multiple types of clients for users to connect to and mana !!! caution - Other clients(such as [Nebula PHP](https://github.com/nebula-contrib/nebula-php), [Nebula Node](https://github.com/nebula-contrib/nebula-node), [Nebula .net](https://github.com/nebula-contrib/nebula-net), [Nebula JDBC](https://github.com/nebula-contrib/nebula-jdbc), [NORM - Nebula Graph's Golang ORM](https://github.com/zhihu/norm), and [Graph-Ocean - Nebula Graph's Java ORM](https://github.com/nebula-contrib/graph-ocean))can also be used to connect to and manage Nebula Graph, but there is no uptime guarantee. \ No newline at end of file + Other clients (such as [Nebula PHP](https://github.com/nebula-contrib/nebula-php), [Nebula Node](https://github.com/nebula-contrib/nebula-node), [Nebula .net](https://github.com/nebula-contrib/nebula-net), [Nebula JDBC](https://github.com/nebula-contrib/nebula-jdbc), [NORM - NebulaGraph's Golang ORM](https://github.com/zhihu/norm), and [Graph-Ocean - NebulaGraph's Java ORM](https://github.com/nebula-contrib/graph-ocean)) can also be used to connect to and manage NebulaGraph, but there is no uptime guarantee.
\ No newline at end of file diff --git a/docs-2.0/14.client/3.nebula-cpp-client.md b/docs-2.0/14.client/3.nebula-cpp-client.md index b25b07d1f3a..9565d9620c1 100644 --- a/docs-2.0/14.client/3.nebula-cpp-client.md +++ b/docs-2.0/14.client/3.nebula-cpp-client.md @@ -1,14 +1,14 @@ # Nebula CPP -[Nebula CPP](https://github.com/vesoft-inc/nebula-cpp/tree/{{cpp.branch}}) is a C++ client for connecting to and managing the Nebula Graph database. +[Nebula CPP](https://github.com/vesoft-inc/nebula-cpp/tree/{{cpp.branch}}) is a C++ client for connecting to and managing the NebulaGraph database. ## Limitations You have installed C++ and GCC 4.8 or later versions. -## Compatibility with Nebula Graph +## Compatibility with NebulaGraph -|Nebula Graph version|Nebula CPP version| +|NebulaGraph version|Nebula CPP version| |:---|:---| |{{ nebula.release }}|{{cpp.release}}| |2.6.x|2.5.0| @@ -98,9 +98,9 @@ Compile the CPP file to an executable file, then you can use it. The following s $ LIBRARY_PATH=<library_folder_path>:$LIBRARY_PATH g++ -std=c++11 SessionExample.cpp -I<include_folder_path> -lnebula_graph_client -o session_example ``` - - `library_folder_path`: The storage path of the Nebula Graph dynamic libraries. The default path is `/usr/local/nebula/lib64`. + - `library_folder_path`: The storage path of the NebulaGraph dynamic libraries. The default path is `/usr/local/nebula/lib64`. - - `include_folder_path`: The storage of the Nebula Graph header files. The default path is `/usr/local/nebula/include`. + - `include_folder_path`: The storage path of the NebulaGraph header files. The default path is `/usr/local/nebula/include`.
For example: diff --git a/docs-2.0/14.client/4.nebula-java-client.md b/docs-2.0/14.client/4.nebula-java-client.md index 23ee7cc55da..41d481e2909 100644 --- a/docs-2.0/14.client/4.nebula-java-client.md +++ b/docs-2.0/14.client/4.nebula-java-client.md @@ -1,14 +1,14 @@ # Nebula Java -[Nebula Java](https://github.com/vesoft-inc/nebula-java/tree/{{java.branch}}) is a Java client for connecting to and managing the Nebula Graph database. +[Nebula Java](https://github.com/vesoft-inc/nebula-java/tree/{{java.branch}}) is a Java client for connecting to and managing the NebulaGraph database. ## Prerequisites You have installed Java 8.0 or later versions. -## Compatibility with Nebula Graph +## Compatibility with NebulaGraph -|Nebula Graph version|Nebula Java version| +|NebulaGraph version|Nebula Java version| |:---|:---| |{{ nebula.release }}|{{java.release}}| |2.6.x|2.6.1| diff --git a/docs-2.0/14.client/5.nebula-python-client.md b/docs-2.0/14.client/5.nebula-python-client.md index ff957d5cdc4..788c8585bba 100644 --- a/docs-2.0/14.client/5.nebula-python-client.md +++ b/docs-2.0/14.client/5.nebula-python-client.md @@ -1,14 +1,14 @@ # Nebula Python -[Nebula Python](https://github.com/vesoft-inc/nebula-python) is a Python client for connecting to and managing the Nebula Graph database. +[Nebula Python](https://github.com/vesoft-inc/nebula-python) is a Python client for connecting to and managing the NebulaGraph database. ## Prerequisites You have installed Python 3.6 or later versions. 
-## Compatibility with Nebula Graph +## Compatibility with NebulaGraph -|Nebula Graph version|Nebula Python version| +|NebulaGraph version|Nebula Python version| |:---|:---| |{{ nebula.release }}|{{python.release}}| |2.6.x|2.6.0| diff --git a/docs-2.0/14.client/6.nebula-go-client.md b/docs-2.0/14.client/6.nebula-go-client.md index eac32a1c6b7..c79cd162165 100644 --- a/docs-2.0/14.client/6.nebula-go-client.md +++ b/docs-2.0/14.client/6.nebula-go-client.md @@ -1,14 +1,14 @@ # Nebula Go -[Nebula Go](https://github.com/vesoft-inc/nebula-go/tree/{{go.branch}}) is a Golang client for connecting to and managing the Nebula Graph database. +[Nebula Go](https://github.com/vesoft-inc/nebula-go/tree/{{go.branch}}) is a Golang client for connecting to and managing the NebulaGraph database. ## Prerequisites You have installed Golang 1.13 or later versions. -## Compatibility with Nebula Graph +## Compatibility with NebulaGraph -|Nebula Graph version|Nebula Go version| +|NebulaGraph version|Nebula Go version| |:---|:---| |{{ nebula.release }}|{{go.release}}| |2.6.x|2.6.0| diff --git a/docs-2.0/15.contribution/how-to-contribute.md b/docs-2.0/15.contribution/how-to-contribute.md index e0e2fd2ab70..1b2c8b58347 100644 --- a/docs-2.0/15.contribution/how-to-contribute.md +++ b/docs-2.0/15.contribution/how-to-contribute.md @@ -28,7 +28,7 @@ This method applies to contribute codes, modify multiple documents in batches, o ## Step 1: Fork in the github.com -The Nebula Graph project has many [repositories](https://github.com/vesoft-inc). Take [the nebul repository](https://github.com/vesoft-inc/nebula) for example: +The NebulaGraph project has many [repositories](https://github.com/vesoft-inc). Take [the nebula repository](https://github.com/vesoft-inc/nebula) for example: 1. Visit [https://github.com/vesoft-inc/nebula](https://github.com/vesoft-inc/nebula). @@ -75,7 +75,7 @@ The Nebula Graph project has many [repositories](https://github.com/vesoft-inc). 4.
(Optional) Define a pre-commit hook. - Please link the Nebula Graph pre-commit hook into the `.git` directory. + Please link the NebulaGraph pre-commit hook into the `.git` directory. This hook checks the commits for formatting, building, doc generation, etc. @@ -123,7 +123,7 @@ The Nebula Graph project has many [repositories](https://github.com/vesoft-inc). - Code style - **Nebula Graph** adopts `cpplint` to make sure that the project conforms to Google's coding style guides. The checker will be implemented before the code is committed. + **NebulaGraph** adopts `cpplint` to make sure that the project conforms to Google's coding style guides. The checker runs before the code is committed. - Unit tests requirements @@ -131,7 +131,7 @@ The Nebula Graph project has many [repositories](https://github.com/vesoft-inc). - Build your code with unit tests enabled - For more information, see [Install Nebula Graph by compiling the source code](../4.deployment-and-installation/2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md). + For more information, see [Install NebulaGraph by compiling the source code](../4.deployment-and-installation/2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md). !!! Note @@ -192,7 +192,7 @@ For detailed methods, see [How to add test cases](https://github.com/vesoft-inc/ ### Step 1: Confirm the project donation -Contact the official Nebula Graph staff via email, WeChat, Slack, etc. to confirm the donation project. The project will be donated to the [Nebula Contrib organization](https://github.com/nebula-contrib). +Contact the official NebulaGraph staff via email, WeChat, Slack, etc. to confirm the donation project. The project will be donated to the [Nebula Contrib organization](https://github.com/nebula-contrib).
Email address: info@vesoft.com @@ -202,7 +202,7 @@ Slack: [Join Slack](https://join.slack.com/t/nebulagraph/shared_invite/zt-7ybeju ### Step 2: Get the information of the project recipient -The Nebula Graph official staff will give the recipient ID of the Nebula Contrib project. +The NebulaGraph official staff will give the recipient ID of the Nebula Contrib project. ### Step 3: Donate a project diff --git a/docs-2.0/2.quick-start/1.quick-start-workflow.md b/docs-2.0/2.quick-start/1.quick-start-workflow.md index ef6e33b9f00..477f06c13f0 100644 --- a/docs-2.0/2.quick-start/1.quick-start-workflow.md +++ b/docs-2.0/2.quick-start/1.quick-start-workflow.md @@ -1,27 +1,27 @@ # Quick start workflow -The quick start introduces the simplest workflow to use Nebula Graph, including deploying Nebula Graph, connecting to Nebula Graph, and doing basic CRUD. +The quick start introduces the simplest workflow to use NebulaGraph, including deploying NebulaGraph, connecting to NebulaGraph, and doing basic CRUD. ## Steps -Users can quickly deploy and use Nebula Graph in the following steps. +Users can quickly deploy and use NebulaGraph in the following steps. -1. [Deploy Nebula Graph](2.install-nebula-graph.md) +1. [Deploy NebulaGraph](2.install-nebula-graph.md) - Users can use the RPM or DEB file to quickly deploy Nebula Graph. For other deployment methods and the corresponding preparations, see the **Deployment and installation** chapter. + Users can use the RPM or DEB file to quickly deploy NebulaGraph. For other deployment methods and the corresponding preparations, see the **Deployment and installation** chapter. -2. [Start Nebula Graph](5.start-stop-service.md) +2. [Start NebulaGraph](5.start-stop-service.md) - Users need to start Nebula Graph after deployment. + Users need to start NebulaGraph after deployment. -3. [Connect to Nebula Graph](3.connect-to-nebula-graph.md) +3. 
[Connect to NebulaGraph](3.connect-to-nebula-graph.md) - Then users can use clients to connect to Nebula Graph. Nebula Graph supports a variety of clients. This topic will describe how to use Nebula Console to connect to Nebula Graph. + Then users can use clients to connect to NebulaGraph. NebulaGraph supports a variety of clients. This topic will describe how to use Nebula Console to connect to NebulaGraph. 4. [Register the Storage Service](3.1add-storage-hosts.md) - When connecting to Nebula Graph for the first time, users must register the Storage Service before querying data. + When connecting to NebulaGraph for the first time, users must register the Storage Service before querying data. -5. [CRUD in Nebula Graph](4.nebula-graph-crud.md) +5. [CRUD in NebulaGraph](4.nebula-graph-crud.md) - Users can use nGQL (Nebula Graph Query Language) to run CRUD after connecting to Nebula Graph. + Users can use nGQL (NebulaGraph Query Language) to run CRUD after connecting to NebulaGraph. diff --git a/docs-2.0/2.quick-start/2.install-nebula-graph.md b/docs-2.0/2.quick-start/2.install-nebula-graph.md index 3771365834c..28018072dc2 100644 --- a/docs-2.0/2.quick-start/2.install-nebula-graph.md +++ b/docs-2.0/2.quick-start/2.install-nebula-graph.md @@ -1,4 +1,4 @@ -# Step 1: Install Nebula Graph +# Step 1: Install NebulaGraph {% include "/source_install-nebula-graph-by-rpm-or-deb.md" %} diff --git a/docs-2.0/2.quick-start/3.1add-storage-hosts.md b/docs-2.0/2.quick-start/3.1add-storage-hosts.md index 3c9475ac3da..d2fbb3b474d 100644 --- a/docs-2.0/2.quick-start/3.1add-storage-hosts.md +++ b/docs-2.0/2.quick-start/3.1add-storage-hosts.md @@ -1,15 +1,15 @@ # Register the Storage Service -When connecting to Nebula Graph for the first time, you have to add the Storage hosts, and confirm that all the hosts are online. +When connecting to NebulaGraph for the first time, you have to add the Storage hosts, and confirm that all the hosts are online. !!! 
compatibility - - Starting from Nebula Graph 3.0.0, you have to run `ADD HOSTS` before reading or writing data into the Storage Service. + - Starting from NebulaGraph 3.0.0, you have to run `ADD HOSTS` before reading or writing data into the Storage Service. - In earlier versions, `ADD HOSTS` is neither needed nor supported. ## Prerequisites -You have [connnected to Nebula Graph](3.connect-to-nebula-graph.md). +You have [connected to NebulaGraph](3.connect-to-nebula-graph.md). ## Steps diff --git a/docs-2.0/2.quick-start/3.connect-to-nebula-graph.md b/docs-2.0/2.quick-start/3.connect-to-nebula-graph.md index 9a07a1262a9..11a5ba57da8 100644 --- a/docs-2.0/2.quick-start/3.connect-to-nebula-graph.md +++ b/docs-2.0/2.quick-start/3.connect-to-nebula-graph.md @@ -1,4 +1,4 @@ -# Step 3: Connect to Nebula Graph +# Step 3: Connect to NebulaGraph {% include "/source_connect-to-nebula-graph.md" %} diff --git a/docs-2.0/2.quick-start/4.nebula-graph-crud.md b/docs-2.0/2.quick-start/4.nebula-graph-crud.md index 9e93de6446f..9fdc5d4f372 100644 --- a/docs-2.0/2.quick-start/4.nebula-graph-crud.md +++ b/docs-2.0/2.quick-start/4.nebula-graph-crud.md @@ -1,16 +1,16 @@ # Step 4: Use nGQL (CRUD) -This topic will describe the basic CRUD operations in Nebula Graph. +This topic will describe the basic CRUD operations in NebulaGraph. For more information, see [nGQL guide](../3.ngql-guide/1.nGQL-overview/1.overview.md). -## Graph space and Nebula Graph schema +## Graph space and NebulaGraph schema -A Nebula Graph instance consists of one or more graph spaces. Graph spaces are physically isolated from each other. You can use different graph spaces in the same instance to store different datasets. +A NebulaGraph instance consists of one or more graph spaces. Graph spaces are physically isolated from each other. You can use different graph spaces in the same instance to store different datasets.
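The instance-and-spaces layout that the hunk above describes can be modeled with a toy sketch. Everything here (the dict layout, the `create_space` helper) is invented for illustration and is not NebulaGraph code.

```python
# Toy model: one NebulaGraph instance holding several physically
# isolated graph spaces, each with its own partition count, VID type,
# and data. The structure below is illustrative only.
instance = {}

def create_space(name, partition_num=100, vid_type="FIXED_STRING(30)"):
    instance[name] = {"partition_num": partition_num,
                      "vid_type": vid_type,
                      "vertices": {}}

create_space("basketballplayer", partition_num=10)
create_space("social_network", partition_num=20)

# Writing into one space never touches another -- the isolation that
# lets one instance store unrelated datasets side by side.
instance["basketballplayer"]["vertices"]["player100"] = {"name": "Tim Duncan"}
```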
-![Nebula Graph and graph spaces](https://docs-cdn.nebula-graph.com.cn/docs-2.0/2.quick-start/nebula-graph-instance-and-graph-spaces.png) +![NebulaGraph and graph spaces](https://docs-cdn.nebula-graph.com.cn/docs-2.0/2.quick-start/nebula-graph-instance-and-graph-spaces.png) -To insert data into a graph space, define a schema for the graph database. Nebula Graph schema is based on the following components. +To insert data into a graph space, define a schema for the graph database. NebulaGraph schema is based on the following components. | Schema component | Description | | ---------------- | ------------| @@ -29,7 +29,7 @@ In this topic, we will use the following dataset to demonstrate basic CRUD opera !!! caution - In Nebula Graph, the following `CREATE` or `ALTER` commands are implemented in an async way and take effect in the **next** heartbeat cycle. Otherwise, an error will be returned. To make sure the follow-up operations work as expected, Wait for two heartbeat cycles, i.e., 20 seconds. + In NebulaGraph, the following `CREATE` or `ALTER` commands are implemented in an async way and take effect in the **next** heartbeat cycle. Otherwise, an error will be returned. To make sure the follow-up operations work as expected, wait for two heartbeat cycles, i.e., 20 seconds. * `CREATE SPACE` * `CREATE TAG` @@ -227,7 +227,7 @@ You can use the `INSERT` statement to insert vertices or edges based on existing * The [LOOKUP](../3.ngql-guide/7.general-query-statements/5.lookup.md) statement is based on [indexes](#about_indexes). It is used together with the `WHERE` clause to search for the data that meet the specific conditions. -* The [MATCH](../3ngql-guide/../3.ngql-guide/7.general-query-statements/2.match.md) statement is the most commonly used statement for graph data querying. It can describe all kinds of graph patterns, but it relies on [indexes](#about_indexes) to match data patterns in Nebula Graph. Therefore, its performance still needs optimization.
+* The [MATCH](../3.ngql-guide/7.general-query-statements/2.match.md) statement is the most commonly used statement for graph data querying. It can describe all kinds of graph patterns, but it relies on [indexes](#about_indexes) to match data patterns in NebulaGraph. Therefore, its performance still needs optimization. ### nGQL syntax diff --git a/docs-2.0/2.quick-start/5.start-stop-service.md b/docs-2.0/2.quick-start/5.start-stop-service.md index d285815d892..373ed96cf5a 100644 --- a/docs-2.0/2.quick-start/5.start-stop-service.md +++ b/docs-2.0/2.quick-start/5.start-stop-service.md @@ -1,4 +1,4 @@ -# Step 2: Manage Nebula Graph Service +# Step 2: Manage NebulaGraph Service {% include "/source_manage-service.md" %} diff --git a/docs-2.0/2.quick-start/6.cheatsheet-for-ngql.md b/docs-2.0/2.quick-start/6.cheatsheet-for-ngql.md index 439becc6467..dd7f42b0a0f 100644 --- a/docs-2.0/2.quick-start/6.cheatsheet-for-ngql.md +++ b/docs-2.0/2.quick-start/6.cheatsheet-for-ngql.md @@ -340,7 +340,7 @@ | Statement | Syntax | Example | Description | | ------------------------------------------------------------ | ------------------------------------------------- | ------------------------------------ | -------------------------------------------------------- | | [SHOW CHARSET](../3.ngql-guide/7.general-query-statements/6.show/1.show-charset.md) | `SHOW CHARSET` | `SHOW CHARSET` | Shows the available character sets. | - | [SHOW COLLATION](../3.ngql-guide/7.general-query-statements/6.show/2.show-collation.md) | `SHOW COLLATION` | `SHOW COLLATION` | Shows the collations supported by Nebula Graph.
| | [SHOW CREATE SPACE](../3.ngql-guide/7.general-query-statements/6.show/4.show-create-space.md) | `SHOW CREATE SPACE <graph_space_name>` | `SHOW CREATE SPACE basketballplayer` | Shows the creating statement of the specified graph space. | | [SHOW CREATE TAG/EDGE](../3.ngql-guide/7.general-query-statements/6.show/5.show-create-tag-edge.md) | `SHOW CREATE {TAG <tag_name> | EDGE <edge_name>}` | `SHOW CREATE TAG player` | Shows the basic information of the specified tag. | | [SHOW HOSTS](../3.ngql-guide/7.general-query-statements/6.show/6.show-hosts.md) | `SHOW HOSTS [GRAPH | STORAGE | META]` | `SHOW HOSTS`
`SHOW HOSTS GRAPH` | Shows the host and version information of Graph Service, Storage Service, and Meta Service. | @@ -349,7 +349,7 @@ | [SHOW PARTS](../3.ngql-guide/7.general-query-statements/6.show/9.show-parts.md) | `SHOW PARTS []` | `SHOW PARTS` | Shows the information of a specified partition or all partitions in a graph space. | | [SHOW ROLES](../3.ngql-guide/7.general-query-statements/6.show/10.show-roles.md) | `SHOW ROLES IN ` | `SHOW ROLES in basketballplayer` | Shows the roles that are assigned to a user account. | | [SHOW SNAPSHOTS](../3.ngql-guide/7.general-query-statements/6.show/11.show-snapshots.md) | `SHOW SNAPSHOTS` | `SHOW SNAPSHOTS` | Shows the information of all the snapshots. - | [SHOW SPACES](../3.ngql-guide/7.general-query-statements/6.show/12.show-spaces.md) | `SHOW SPACES` | `SHOW SPACES` | Shows existing graph spaces in Nebula Graph. | + | [SHOW SPACES](../3.ngql-guide/7.general-query-statements/6.show/12.show-spaces.md) | `SHOW SPACES` | `SHOW SPACES` | Shows existing graph spaces in NebulaGraph. | | [SHOW STATS](../3.ngql-guide/7.general-query-statements/6.show/14.show-stats.md) | `SHOW STATS` | `SHOW STATS` | Shows the statistics of the graph space collected by the latest `STATS` job. | | [SHOW TAGS/EDGES](../3.ngql-guide/7.general-query-statements/6.show/15.show-tags-edges.md) | `SHOW TAGS | EDGES` | `SHOW TAGS`,`SHOW EDGES` | Shows all the tags in the current graph space. | | [SHOW USERS](../3.ngql-guide/7.general-query-statements/6.show/16.show-users.md) | `SHOW USERS` | `SHOW USERS` | Shows the user information. 
| @@ -382,7 +382,7 @@ | [CREATE SPACE](../3.ngql-guide/9.space-statements/1.create-space.md) | `CREATE SPACE [IF NOT EXISTS] ( [partition_num = ,] [replica_factor = ,] vid_type = {FIXED_STRING() | INT[64]} ) [COMMENT = '']` | `CREATE SPACE my_space_1 (vid_type=FIXED_STRING(30))` | Creates a graph space with the specified parameters. | | [CREATE SPACE](../3.ngql-guide/9.space-statements/1.create-space.md) | `CREATE SPACE AS ` | `CREATE SPACE my_space_4 as my_space_3` | Clones a graph space. | | [USE](../3.ngql-guide/9.space-statements/2.use-space.md) | `USE ` | `USE space1` | Specifies a graph space as the current working graph space for subsequent queries. | -| [SHOW SPACES](../3.ngql-guide/9.space-statements/3.show-spaces.md) | `SHOW SPACES` | `SHOW SPACES` | Lists all the graph spaces in the Nebula Graph examples. | +| [SHOW SPACES](../3.ngql-guide/9.space-statements/3.show-spaces.md) | `SHOW SPACES` | `SHOW SPACES` | Lists all the graph spaces in the NebulaGraph cluster. | | [DESCRIBE SPACE](../3.ngql-guide/9.space-statements/4.describe-space.md) | `DESC[RIBE] SPACE ` | `DESCRIBE SPACE basketballplayer` | Returns the information about the specified graph space. | | [CLEAR SPACE](../3.ngql-guide/9.space-statements/6.clear-space.md) | `CLEAR SPACE [IF EXISTS] ` | Deletes the vertices and edges in a graph space, but does not delete the graph space itself and the schema information. | | [DROP SPACE](../3.ngql-guide/9.space-statements/5.drop-space.md) | `DROP SPACE [IF EXISTS] ` | `DROP SPACE basketballplayer` | Deletes everything in the specified graph space. 
| @@ -412,7 +412,7 @@ | Statement | Syntax | Example | Description | | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | -| [INSERT VERTEX](../3.ngql-guide/12.vertex-statements/1.insert-vertex.md) | `INSERT VERTEX [IF NOT EXISTS] [tag_props, [tag_props] ...] VALUES : ([prop_value_list])` | `INSERT VERTEX t2 (name, age) VALUES "13":("n3", 12), "14":("n4", 8)` | Inserts one or more vertices into a graph space in Nebula Graph. | +| [INSERT VERTEX](../3.ngql-guide/12.vertex-statements/1.insert-vertex.md) | `INSERT VERTEX [IF NOT EXISTS] [tag_props, [tag_props] ...] VALUES : ([prop_value_list])` | `INSERT VERTEX t2 (name, age) VALUES "13":("n3", 12), "14":("n4", 8)` | Inserts one or more vertices into a graph space in NebulaGraph. | | [DELETE VERTEX](../3.ngql-guide/12.vertex-statements/4.delete-vertex.md) | `DELETE VERTEX [, ...]` | `DELETE VERTEX "team1"` | Deletes vertices and the related incoming and outgoing edges of the vertices. | | [UPDATE VERTEX](../3.ngql-guide/12.vertex-statements/2.update-vertex.md) | `UPDATE VERTEX ON SET [WHEN ] [YIELD ]` | `UPDATE VERTEX ON player "player101" SET age = age + 2 ` | Updates properties on tags of a vertex. | | [UPSERT VERTEX](../3.ngql-guide/12.vertex-statements/3.upsert-vertex.md) | `UPSERT VERTEX ON SET [WHEN ] [YIELD ]` | `UPSERT VERTEX ON player "player667" SET age = 31` | The `UPSERT` statement is a combination of `UPDATE` and `INSERT`. You can use `UPSERT VERTEX` to update the properties of a vertex if it exists or insert a new vertex if it does not exist. 
| @@ -421,7 +421,7 @@ | Statement | Syntax | Example | Description | | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | -| [INSERT EDGE](../3.ngql-guide/13.edge-statements/1.insert-edge.md) | `INSERT EDGE [IF NOT EXISTS] ( ) VALUES -> [@] : ( ) [, -> [@] : ( ), ...]` | `INSERT EDGE e2 (name, age) VALUES "11"->"13":("n1", 1)` | Inserts an edge or multiple edges into a graph space from a source vertex (given by src_vid) to a destination vertex (given by dst_vid) with a specific rank in Nebula Graph. | +| [INSERT EDGE](../3.ngql-guide/13.edge-statements/1.insert-edge.md) | `INSERT EDGE [IF NOT EXISTS] ( ) VALUES -> [@] : ( ) [, -> [@] : ( ), ...]` | `INSERT EDGE e2 (name, age) VALUES "11"->"13":("n1", 1)` | Inserts an edge or multiple edges into a graph space from a source vertex (given by src_vid) to a destination vertex (given by dst_vid) with a specific rank in NebulaGraph. | | [DELETE EDGE](../3.ngql-guide/12.vertex-statements/3.upsert-vertex.md) | `DELETE EDGE -> [@] [, -> [@] ...]` | `DELETE EDGE serve "player100" -> "team204"@0` | Deletes one edge or multiple edges at a time. | | [UPDATE EDGE](../3.ngql-guide/13.edge-statements/2.update-edge.md) | `UPDATE EDGE ON -> [@] SET [WHEN ] [YIELD ]` | `UPDATE EDGE ON serve "player100" -> "team204"@0 SET start_year = start_year + 1` | Updates properties on an edge. | | [UPSERT EDGE](../3.ngql-guide/12.vertex-statements/3.upsert-vertex.md) | `UPSERT EDGE ON -> [@rank] SET [WHEN ] [YIELD ]` | `UPSERT EDGE on serve "player666" -> "team200"@0 SET end_year = 2021` | The `UPSERT` statement is a combination of `UPDATE` and `INSERT`. You can use `UPSERT EDGE` to update the properties of an edge if it exists or insert a new edge if it does not exist. 
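Taken together, a minimal write sequence using the vertex and edge statements above might look like the following sketch. It assumes a current graph space with the tag `player(name, age)` and the edge type `serve(start_year, end_year)` from the example dataset; the vertex IDs and property values are made up for illustration.

```ngql
// Insert, update, and upsert a vertex.
INSERT VERTEX player(name, age) VALUES "player100":("Tim Duncan", 42);
UPDATE VERTEX ON player "player100" SET age = age + 1;
UPSERT VERTEX ON player "player100" SET age = 44;

// Insert and update an edge, then clean up.
INSERT EDGE serve(start_year, end_year) VALUES "player100"->"team204":(1997, 2016);
UPDATE EDGE ON serve "player100"->"team204"@0 SET end_year = end_year + 1;
DELETE EDGE serve "player100"->"team204"@0;
DELETE VERTEX "player100";
```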
| diff --git a/docs-2.0/20.appendix/0.FAQ.md b/docs-2.0/20.appendix/0.FAQ.md index e8fe089c7ec..033d6feaab7 100644 --- a/docs-2.0/20.appendix/0.FAQ.md +++ b/docs-2.0/20.appendix/0.FAQ.md @@ -1,14 +1,14 @@ # FAQ -This topic lists the frequently asked questions for using Nebula Graph {{ nebula.release }}. You can use the search box in the help center or the search function of the browser to match the questions you are looking for. +This topic lists the frequently asked questions for using NebulaGraph {{ nebula.release }}. You can use the search box in the help center or the search function of the browser to match the questions you are looking for. -If the solutions described in this topic cannot solve your problems, ask for help on the [Nebula Graph forum](https://discuss.nebula-graph.io/) or submit an issue on [GitHub issue](https://github.com/vesoft-inc/nebula/issues). +If the solutions described in this topic cannot solve your problems, ask for help on the [NebulaGraph forum](https://discuss.nebula-graph.io/) or submit an issue on [GitHub issue](https://github.com/vesoft-inc/nebula/issues). ## About manual updates ### "Why is the behavior in the manual not consistent with the system?" -Nebula Graph is still under development. Its behavior changes from time to time. Users can submit an [issue](https://github.com/vesoft-inc/nebula/issues/new) to inform the team if the manual and the system are not consistent. +NebulaGraph is still under development. Its behavior changes from time to time. Users can submit an [issue](https://github.com/vesoft-inc/nebula/issues/new) to inform the team if the manual and the system are not consistent. !!! note @@ -22,35 +22,35 @@ Nebula Graph is still under development. Its behavior changes from time to time. !!! compatibility "`X` version compatibility" - Neubla Graph {{ nebula.release }} is **not compatible** with Nebula Graph 1.x nor 2.0-RC in both data formats and RPC-protocols, and **vice versa**. 
The service process may **quit** if using an **lower version** client to connect to a **higher version** server. + NebulaGraph {{ nebula.release }} is **not compatible** with NebulaGraph 1.x or 2.0-RC in either data formats or RPC protocols, and **vice versa**. The service process may **quit** if a **lower version** client connects to a **higher version** server. - To upgrade data formats, see [Upgrade Nebula Graph to the current version](../4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-graph-to-latest.md). + To upgrade data formats, see [Upgrade NebulaGraph to the current version](../4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-graph-to-latest.md). Users must upgrade [all clients](../20.appendix/6.eco-tool-version.md). ## About execution errors ### "How to resolve the error `SemanticError: Missing yield clause.`?" -Starting with Nebula Graph 3.0.0, the statements `LOOKUP`, `GO`, and `FETCH` must output results with the `YIELD` clause. For more information, see [YIELD](../3.ngql-guide/8.clauses-and-options/yield.md). +Starting with NebulaGraph 3.0.0, the statements `LOOKUP`, `GO`, and `FETCH` must output results with the `YIELD` clause. For more information, see [YIELD](../3.ngql-guide/8.clauses-and-options/yield.md). ### "How to resolve the error `Host not enough!`?" -From Nebula Graph version 3.0.0, the Storage services added in the configuration files **CANNOT** be read or written directly. The configuration files only register the Storage services into the Meta services. 
You must run the `ADD HOSTS` command to read and write data on Storage servers. For more information, see [Manage Storage hosts](../4.deployment-and-installation/manage-storage-host.md). ### "How to resolve the error `To get the property of the vertex in 'v.age', should use the format 'var.tag.prop'`?" -From Nebula Graph version 3.0.0, patterns support matching multiple tags at the same time, so you need to specify a tag name when querying properties. The original statement `RETURN variable_name.property_name` is changed to `RETURN variable_name.<tag_name>.property_name`. +From NebulaGraph version 3.0.0, patterns support matching multiple tags at the same time, so you need to specify a tag name when querying properties. The original statement `RETURN variable_name.property_name` is changed to `RETURN variable_name.<tag_name>.property_name`. + -### 3.3 Configure Nebula Graph +### 3.3 Configure NebulaGraph | Document | | ------------------------------------------------------------ | @@ -119,7 +119,7 @@ This topic is for anyone interested in learning more about Nebula Graph. You can | Document | | ------------------------------------------------------------ | - | [Nebula Graph metrics](../6.monitor-and-metrics/1.query-performance-metrics.md) | + | [NebulaGraph metrics](../6.monitor-and-metrics/1.query-performance-metrics.md) | | [RocksDB statistics](../6.monitor-and-metrics/2.rocksdb-statistics.md) | - Data snapshot @@ -220,11 +220,11 @@ This topic is for anyone interested in learning more about Nebula Graph. 
You can | Document | | ------------------------------------------------------------ | | [Handling Tens of Billions of Threat Intelligence Data with Graph Database at Kuaishou](https://nebula-graph.io/posts/kuaishou-security-intelligence-platform-with-nebula-graph/) | - | [Import data from Neo4j to Nebula Graph via Nebula Exchange: Best Practices](https://nebula-graph.io/posts/neo4j-nebula-graph-import-best-practice/) | - | [Hands-On Experience: Import Data to Nebula Graph with Spark](https://nebula-graph.io/posts/best-practices-import-data-spark-nebula-graph/) | + | [Import data from Neo4j to NebulaGraph via Nebula Exchange: Best Practices](https://nebula-graph.io/posts/neo4j-nebula-graph-import-best-practice/) | + | [Hands-On Experience: Import Data to NebulaGraph with Spark](https://nebula-graph.io/posts/best-practices-import-data-spark-nebula-graph/) | | [How to Select a Graph Database: Best Practices at RoyalFlush](https://nebula-graph.io/posts/how-to-select-a-graph-database/) | | [Practicing Nebula Operator on Cloud](https://nebula-graph.io/posts/nebula-operator-practice/) | - | [Using Ansible to Automate Deployment of Nebula Graph Cluster](https://nebula-graph.io/posts/deploy-nebula-graph-with-ansible/) | + | [Using Ansible to Automate Deployment of NebulaGraph Cluster](https://nebula-graph.io/posts/deploy-nebula-graph-with-ansible/) | ## 6. FAQ @@ -236,20 +236,20 @@ This topic is for anyone interested in learning more about Nebula Graph. You can ## 7. Practical tasks -You can check if you have mastered Nebula Graph by completing the following practical tasks. +You can check if you have mastered NebulaGraph by completing the following practical tasks. 
| Task | Reference | | ------------------------------------------------------- | ------------------------------------------------------------ | - | Compile the source code of Nebula Graph | [Install Nebula Graph by compiling the source code](../4.deployment-and-installation/2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md) | + | Compile the source code of NebulaGraph | [Install NebulaGraph by compiling the source code](../4.deployment-and-installation/2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md) | | Deploy Studio, Dashboard, and Explorer | [Deploy Studio](../nebula-studio/deploy-connect/st-ug-deploy.md), [Deploy Dashboard](../nebula-dashboard/2.deploy-dashboard.md), and [Deploy Explorer](../nebula-explorer/deploy-connect/ex-ug-deploy.md) | - | Load test Nebula Graph with K6 | [Nebula Bench](../nebula-bench.md) | + | Load test NebulaGraph with K6 | [Nebula Bench](../nebula-bench.md) | | Query LDBC data (such as queries for vertices, paths, or subgraphs) | [LDBC](http://ldbcouncil.org/ldbc_snb_docs/ldbc-snb-specification.pdf) and [interactive-short-1.cypher](https://github.com/ldbc/ldbc_snb_interactive/blob/main/cypher/queries/interactive-short-1.cypher) | -## 8. Get Nebula Graph Certifications +## 8. Get NebulaGraph Certifications -Now you could get Nebula Graph Certifications from [Nebula Academy](https://academic.nebula-graph.io). +Now you can get NebulaGraph Certifications from [Nebula Academy](https://academic.nebula-graph.io). -- Nebula Graph Certified Insider(NGCI): The NGCI certification provides a birdview to graph databases and the Nebula Graph database. +- NebulaGraph Certified Insider (NGCI): The NGCI certification provides a bird's-eye view of graph databases and the NebulaGraph database. 
Passing NGCI shows that you have a good understanding of NebulaGraph. -- Nebula Graph Certified Professional(NGCP): The NGCP certification drives you deep into the Nebula Graph database and its ecosystem, providing a 360-degree view of the leading-edge graph database. Passing NGCP proves that you are a professional with a profound understanding of Nebula Graph. +- NebulaGraph Certified Professional (NGCP): The NGCP certification drives you deep into the NebulaGraph database and its ecosystem, providing a 360-degree view of the leading-edge graph database. Passing NGCP proves that you are a professional with a profound understanding of NebulaGraph. diff --git a/docs-2.0/20.appendix/release-note.md b/docs-2.0/20.appendix/release-note.md index fcf54e5c9f0..0632d54fa66 100644 --- a/docs-2.0/20.appendix/release-note.md +++ b/docs-2.0/20.appendix/release-note.md @@ -1,4 +1,4 @@ -# Nebula Graph {{ nebula.release }} release notes +# NebulaGraph {{ nebula.release }} release notes ## Enhancement @@ -14,14 +14,14 @@ - Graph spaces are physically deleted after using `DROP SPACE`. [#3913](https://github.com/vesoft-inc/nebula/pull/3913) - Optimized number parsing in date time, date, time. [#3797](https://github.com/vesoft-inc/nebula/pull/3797) - Added the `toSet` function which converts `LIST` or `SET` to `SET`. [#3594](https://github.com/vesoft-inc/nebula/pull/3594) -- nGQL statements can be used to display the HTTP port of Nebula Graph services and the HTTP2 port has been disabled. [#3808](https://github.com/vesoft-inc/nebula/pull/3808) +- nGQL statements can be used to display the HTTP port of NebulaGraph services and the HTTP2 port has been disabled. [#3808](https://github.com/vesoft-inc/nebula/pull/3808) - The number of sessions for connections to each graphd with the same client IP and the same user is limited. [#3729](https://github.com/vesoft-inc/nebula/pull/3729) - Optimized the waiting mechanism to ensure a timely connection to the metad after the storaged starts. 
[#3971](https://github.com/vesoft-inc/nebula/pull/3971) - When a node has multiple paths and an error occurs on the disk corresponding to a particular path, it is no longer necessary to rebuild the node. [#4131](https://github.com/vesoft-inc/nebula/pull/4131) - Optimized the job manager. [#3976](https://github.com/vesoft-inc/nebula/pull/3976) [#4045](https://github.com/vesoft-inc/nebula/pull/4045) [#4001](https://github.com/vesoft-inc/nebula/pull/4001) - The `DOWNLOAD` and `INGEST` SST files are now managed with the job manager. [#3994](https://github.com/vesoft-inc/nebula/pull/3994) - Support for error code display when a job fails. [#4067](https://github.com/vesoft-inc/nebula/pull/4067) -- The OS page cache can be disabled and the block cache and Nebula Graph storage cache can only be used in a shared environment, to avoid memory usage interference between applications. [#3890](https://github.com/vesoft-inc/nebula/pull/3890) +- The OS page cache can be disabled and the block cache and NebulaGraph storage cache can only be used in a shared environment, to avoid memory usage interference between applications. [#3890](https://github.com/vesoft-inc/nebula/pull/3890) - Updated the default value of the KV separation threshold from 0 to 100. [#3879](https://github.com/vesoft-inc/nebula/pull/3879) - Support for using gflag to set the upper limit of expression depth for a better fit of different machine environments. [#3722](https://github.com/vesoft-inc/nebula/pull/3722) - Added a permission check for `KILL QUERY`. When the authorization is enabled, the GOD user can kill any query and the users with other roles can only kill queries that they own. 
[#3896](https://github.com/vesoft-inc/nebula/pull/3896) diff --git a/docs-2.0/20.appendix/write-tools.md b/docs-2.0/20.appendix/write-tools.md index d1fc45ddbaa..884821900fb 100644 --- a/docs-2.0/20.appendix/write-tools.md +++ b/docs-2.0/20.appendix/write-tools.md @@ -1,6 +1,6 @@ # Import tools -There are many ways to write Nebula Graph {{ nebula.release }}: +There are many ways to write NebulaGraph {{ nebula.release }}: - Import with [the command -f](../2.quick-start/3.connect-to-nebula-graph.md): This method imports a small number of prepared nGQL files, which is suitable to prepare for a small amount of manual test data. - Import with [Studio](../nebula-studio/quick-start/st-ug-import-data.md): This method uses a browser to import multiple csv files of this machine. A single file cannot exceed 100 MB, and its format is limited. diff --git a/docs-2.0/3.ngql-guide/1.nGQL-overview/1.overview.md b/docs-2.0/3.ngql-guide/1.nGQL-overview/1.overview.md index 4f8d63cb7e0..dc22ec8b638 100644 --- a/docs-2.0/3.ngql-guide/1.nGQL-overview/1.overview.md +++ b/docs-2.0/3.ngql-guide/1.nGQL-overview/1.overview.md @@ -1,12 +1,12 @@ -# Nebula Graph Query Language (nGQL) +# NebulaGraph Query Language (nGQL) -This topic gives an introduction to the query language of Nebula Graph, nGQL. +This topic gives an introduction to the query language of NebulaGraph, nGQL. ## What is nGQL -nGQL is a declarative graph query language for Nebula Graph. It allows expressive and efficient [graph patterns](3.graph-patterns.md). nGQL is designed for both developers and operations professionals. nGQL is an SQL-like query language, so it's easy to learn. +nGQL is a declarative graph query language for NebulaGraph. It allows expressive and efficient [graph patterns](3.graph-patterns.md). nGQL is designed for both developers and operations professionals. nGQL is an SQL-like query language, so it's easy to learn. -nGQL is a project in progress. New features and optimizations are done steadily. 
There can be differences between syntax and implementation. Submit an [issue](https://github.com/vesoft-inc/nebula-graph/issues) to inform the Nebula Graph team if you find a new issue of this type. Nebula Graph 3.0 or later releases will support [openCypher 9](https://www.opencypher.org/resources). +nGQL is a project in progress. New features and optimizations are done steadily. There can be differences between syntax and implementation. Submit an [issue](https://github.com/vesoft-inc/nebula-graph/issues) to inform the NebulaGraph team if you find a new issue of this type. NebulaGraph 3.0 or later releases will support [openCypher 9](https://www.opencypher.org/resources). ## What can nGQL do @@ -21,11 +21,11 @@ nGQL is a project in progress. New features and optimizations are done steadily. ## Example data Basketballplayer -Users can download the example data [Basketballplayer](https://docs.nebula-graph.io/2.0/basketballplayer-2.X.ngql) in Nebula Graph. After downloading the example data, you can import it to Nebula Graph by using the `-f` option in [Nebula Graph Console](../../2.quick-start/3.connect-to-nebula-graph.md). +Users can download the example data [Basketballplayer](https://docs.nebula-graph.io/2.0/basketballplayer-2.X.ngql) in NebulaGraph. After downloading the example data, you can import it to NebulaGraph by using the `-f` option in [NebulaGraph Console](../../2.quick-start/3.connect-to-nebula-graph.md). !!! note - Ensure that you have executed the `ADD HOSTS` command to add the Storage service to your Nebula Graph cluster before importing the example data. For more information, see [Manage Storage hosts](../../4.deployment-and-installation/manage-storage-host.md). + Ensure that you have executed the `ADD HOSTS` command to add the Storage service to your NebulaGraph cluster before importing the example data. For more information, see [Manage Storage hosts](../../4.deployment-and-installation/manage-storage-host.md). 
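Following the note above, registering the Storage hosts before the first import might look like this sketch (the host names and ports are placeholders; replace them with your own Storage addresses):

```ngql
// Register the Storage hosts with the Meta service, then verify their status.
ADD HOSTS "storaged0":9779, "storaged1":9779;
SHOW HOSTS;
```

Once the hosts show as online, the downloaded `basketballplayer-2.X.ngql` file can be executed through the console's `-f` option as described above.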
## Placeholder identifiers and values @@ -73,7 +73,7 @@ nebula> CREATE TAG IF NOT EXISTS player(name string, age int); ### Native nGQL and openCypher -Native nGQL is the part of a graph query language designed and implemented by Nebula Graph. OpenCypher is a graph query language maintained by openCypher Implementers Group. +Native nGQL is the part of a graph query language designed and implemented by NebulaGraph. OpenCypher is a graph query language maintained by openCypher Implementers Group. The latest release is openCypher 9. The compatible parts of openCypher in nGQL are called openCypher compatible sentences (short as openCypher). @@ -99,7 +99,7 @@ NO. Users can search in this manual with the keyword `compatibility` to find major compatibility issues. - Multiple known incompatible items are listed in [Nebula Graph Issues](https://github.com/vesoft-inc/nebula-graph/issues?q=is%3Aissue+is%3Aopen+label%3Aincompatible). Submit an issue with the `incompatible` tag if you find a new issue of this type. + Multiple known incompatible items are listed in [NebulaGraph Issues](https://github.com/vesoft-inc/nebula-graph/issues?q=is%3Aissue+is%3Aopen+label%3Aincompatible). Submit an issue with the `incompatible` tag if you find a new issue of this type. ### What are the major differences between nGQL and openCypher 9? @@ -127,7 +127,7 @@ The following are some major differences (by design incompatible) between nGQL a ### Where can I find more nGQL examples? -Users can find more than 2500 nGQL examples in the [features](https://github.com/vesoft-inc/nebula/tree/master/tests/tck/features) directory on the Nebula Graph GitHub page. +Users can find more than 2500 nGQL examples in the [features](https://github.com/vesoft-inc/nebula/tree/master/tests/tck/features) directory on the NebulaGraph GitHub page. The `features` directory consists of `.feature` files. Each file records scenarios that you can use as nGQL examples. 
Here is an example: @@ -192,7 +192,7 @@ The keywords in the preceding example are described as follows. |`Given`|Describes the prerequisites of running the test statements in the current `.feature` file.| |`Scenario`|Describes the scenarios. If there is the `@skip` before one `Scenario`, this scenario may not work and do not use it as a working example in a production environment.| |`When`|Describes the nGQL statement to be executed. It can be a `executing query` or `profiling query`.| -|`Then`|Describes the expected return results of running the statement in the `When` clause. If the return results in your environment do not match the results described in the `.feature` file, submit an [issue](https://github.com/vesoft-inc/nebula-graph/issues) to inform the Nebula Graph team.| +|`Then`|Describes the expected return results of running the statement in the `When` clause. If the return results in your environment do not match the results described in the `.feature` file, submit an [issue](https://github.com/vesoft-inc/nebula-graph/issues) to inform the NebulaGraph team.| |`And`|Describes the side effects of running the statement in the `When` clause.| | `@skip` | This test case will be skipped. Commonly, the to-be-tested code is not ready.| @@ -202,10 +202,10 @@ Welcome to [add more tck case](https://github.com/vesoft-inc/nebula-graph/tree/m No. And no plan to support that. -### Does Nebula Graph support W3C RDF (SPARQL) or GraphQL? +### Does NebulaGraph support W3C RDF (SPARQL) or GraphQL? No. And no plan to support that. -The data model of Nebula Graph is the property graph. And as a strong schema system, Nebula Graph does not support RDF. +The data model of NebulaGraph is the property graph. And as a strong schema system, NebulaGraph does not support RDF. -Nebula Graph Query Language does not support `SPARQL` nor `GraphQL`. +NebulaGraph Query Language does not support `SPARQL` nor `GraphQL`. 
diff --git a/docs-2.0/3.ngql-guide/1.nGQL-overview/3.graph-patterns.md b/docs-2.0/3.ngql-guide/1.nGQL-overview/3.graph-patterns.md index 39cea1de5db..8ac3cb19daf 100644 --- a/docs-2.0/3.ngql-guide/1.nGQL-overview/3.graph-patterns.md +++ b/docs-2.0/3.ngql-guide/1.nGQL-overview/3.graph-patterns.md @@ -1,6 +1,6 @@ # Patterns -Patterns and graph pattern matching are the very heart of a graph query language. This topic will describe the patterns in Nebula Graph, some of which have not yet been implemented. +Patterns and graph pattern matching are the very heart of a graph query language. This topic will describe the patterns in NebulaGraph, some of which have not yet been implemented. ## Patterns for vertices diff --git a/docs-2.0/3.ngql-guide/1.nGQL-overview/comments.md b/docs-2.0/3.ngql-guide/1.nGQL-overview/comments.md index 580303b76a2..42966e14653 100644 --- a/docs-2.0/3.ngql-guide/1.nGQL-overview/comments.md +++ b/docs-2.0/3.ngql-guide/1.nGQL-overview/comments.md @@ -4,8 +4,8 @@ This topic will describe the comments in nGQL. !!! compatibility "Legacy version compatibility" - * In Nebula Graph 1.x, there are four comment styles: `#`, `--`, `//`, `/* */`. - * Since Nebula Graph 2.x, `--` cannot be used as comments. + * In NebulaGraph 1.x, there are four comment styles: `#`, `--`, `//`, `/* */`. + * Since NebulaGraph 2.x, `--` cannot be used as comments. ## Examples diff --git a/docs-2.0/3.ngql-guide/1.nGQL-overview/ngql-style-guide.md b/docs-2.0/3.ngql-guide/1.nGQL-overview/ngql-style-guide.md index cc352f95ebf..892a9cd6900 100644 --- a/docs-2.0/3.ngql-guide/1.nGQL-overview/ngql-style-guide.md +++ b/docs-2.0/3.ngql-guide/1.nGQL-overview/ngql-style-guide.md @@ -190,7 +190,7 @@ The strings should be surrounded by double quotes. When single or double quotes need to be nested in a string, use a backslash (\) to escape. 
For example: ```ngql - RETURN "\"Nebula Graph is amazing,\" the user says."; + RETURN "\"NebulaGraph is amazing,\" the user says."; ``` diff --git a/docs-2.0/3.ngql-guide/10.tag-statements/1.create-tag.md b/docs-2.0/3.ngql-guide/10.tag-statements/1.create-tag.md index d5ae868166e..c5e8398d279 100644 --- a/docs-2.0/3.ngql-guide/10.tag-statements/1.create-tag.md +++ b/docs-2.0/3.ngql-guide/10.tag-statements/1.create-tag.md @@ -11,7 +11,7 @@ Tags in nGQL are similar to labels in openCypher. But they are also quite differ ## Prerequisites -Running the `CREATE TAG` statement requires some [privileges](../../7.data-security/1.authentication/3.role-list.md) for the graph space. Otherwise, Nebula Graph throws an error. +Running the `CREATE TAG` statement requires some [privileges](../../7.data-security/1.authentication/3.role-list.md) for the graph space. Otherwise, NebulaGraph throws an error. ## Syntax @@ -35,7 +35,7 @@ CREATE TAG [IF NOT EXISTS] |``|The name of the property. It must be unique for each tag. The rules for permitted property names are the same as those for tag names.| |``|Shows the data type of each property. For a full description of the property data types, see [Data types](../3.data-types/1.numeric.md) and [Boolean](../3.data-types/2.boolean.md).| |`NULL \| NOT NULL`|Specifies if the property supports `NULL | NOT NULL`. The default value is `NULL`.| -|`DEFAULT`|Specifies a default value for a property. The default value can be a literal value or an expression supported by Nebula Graph. If no value is specified, the default value is used when inserting a new vertex.| +|`DEFAULT`|Specifies a default value for a property. The default value can be a literal value or an expression supported by NebulaGraph. If no value is specified, the default value is used when inserting a new vertex.| |`COMMENT`|The remarks of a certain property or the tag itself. The maximum length is 256 bytes. 
By default, there will be no comments on a tag.| |`TTL_DURATION`|Specifies the life cycle for the property. The property that exceeds the specified TTL expires. The expiration threshold is the `TTL_COL` value plus the `TTL_DURATION`. The default value of `TTL_DURATION` is `0`. It means the data never expires.| |`TTL_COL`|Specifies the property to set a timeout on. The data type of the property must be `int` or `timestamp`. A tag can only specify one field as `TTL_COL`. For more information on TTL, see [TTL options](../8.clauses-and-options/ttl-options.md).| diff --git a/docs-2.0/3.ngql-guide/10.tag-statements/2.drop-tag.md b/docs-2.0/3.ngql-guide/10.tag-statements/2.drop-tag.md index d3723d412fb..99a489f980f 100644 --- a/docs-2.0/3.ngql-guide/10.tag-statements/2.drop-tag.md +++ b/docs-2.0/3.ngql-guide/10.tag-statements/2.drop-tag.md @@ -12,7 +12,7 @@ This operation only deletes the Schema data. All the files or directories in the ## Prerequisites -- Running the `DROP TAG` statement requires some [privileges](../../7.data-security/1.authentication/3.role-list.md) for the graph space. Otherwise, Nebula Graph throws an error. +- Running the `DROP TAG` statement requires some [privileges](../../7.data-security/1.authentication/3.role-list.md) for the graph space. Otherwise, NebulaGraph throws an error. - Before you drop a tag, make sure that the tag does not have any indexes. Otherwise, the conflict error (`[ERROR (-1005)]: Conflict!`) will be returned when you run the `DROP TAG` statement. To drop an index, see [DROP INDEX](../14.native-index-statements/6.drop-native-index.md). 
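Putting the prerequisite above into practice, dropping an indexed tag might look like the following sketch. The index name `player_index_0` is hypothetical; list the real ones with `SHOW TAG INDEXES`.

```ngql
// Drop the tag's indexes first; otherwise DROP TAG returns [ERROR (-1005)]: Conflict!
DROP TAG INDEX IF EXISTS player_index_0;
DROP TAG IF EXISTS player;
```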
diff --git a/docs-2.0/3.ngql-guide/10.tag-statements/3.alter-tag.md b/docs-2.0/3.ngql-guide/10.tag-statements/3.alter-tag.md index 90bea69b492..7a1e62f186c 100644 --- a/docs-2.0/3.ngql-guide/10.tag-statements/3.alter-tag.md +++ b/docs-2.0/3.ngql-guide/10.tag-statements/3.alter-tag.md @@ -4,7 +4,7 @@ ## Prerequisites -- Running the `ALTER TAG` statement requires some [privileges](../../7.data-security/1.authentication/3.role-list.md) for the graph space. Otherwise, Nebula Graph throws an error. +- Running the `ALTER TAG` statement requires some [privileges](../../7.data-security/1.authentication/3.role-list.md) for the graph space. Otherwise, NebulaGraph throws an error. - Before you alter properties for a tag, make sure that the properties are not indexed. If the properties contain any indexes, the conflict error `[ERROR (-1005)]: Conflict!` will occur when you `ALTER TAG`. For more information on dropping an index, see [DROP INDEX](../14.native-index-statements/6.drop-native-index.md). diff --git a/docs-2.0/3.ngql-guide/10.tag-statements/5.describe-tag.md b/docs-2.0/3.ngql-guide/10.tag-statements/5.describe-tag.md index 05a36bb5164..781431dbfd6 100644 --- a/docs-2.0/3.ngql-guide/10.tag-statements/5.describe-tag.md +++ b/docs-2.0/3.ngql-guide/10.tag-statements/5.describe-tag.md @@ -4,7 +4,7 @@ ## Prerequisite -Running the `DESCRIBE TAG` statement requires some [privileges](../../7.data-security/1.authentication/3.role-list.md) for the graph space. Otherwise, Nebula Graph throws an error. +Running the `DESCRIBE TAG` statement requires some [privileges](../../7.data-security/1.authentication/3.role-list.md) for the graph space. Otherwise, NebulaGraph throws an error. 
## Syntax diff --git a/docs-2.0/3.ngql-guide/10.tag-statements/6.delete-tag.md b/docs-2.0/3.ngql-guide/10.tag-statements/6.delete-tag.md index 335342c56df..335bdcb6c33 100644 --- a/docs-2.0/3.ngql-guide/10.tag-statements/6.delete-tag.md +++ b/docs-2.0/3.ngql-guide/10.tag-statements/6.delete-tag.md @@ -4,7 +4,7 @@ ## Prerequisites -Running the `DELETE TAG` statement requires some [privileges](../../7.data-security/1.authentication/3.role-list.md) for the graph space. Otherwise, Nebula Graph throws an error. +Running the `DELETE TAG` statement requires some [privileges](../../7.data-security/1.authentication/3.role-list.md) for the graph space. Otherwise, NebulaGraph throws an error. ## Syntax diff --git a/docs-2.0/3.ngql-guide/10.tag-statements/improve-query-by-tag-index.md b/docs-2.0/3.ngql-guide/10.tag-statements/improve-query-by-tag-index.md index f093b55fd66..88c3f3b70f1 100644 --- a/docs-2.0/3.ngql-guide/10.tag-statements/improve-query-by-tag-index.md +++ b/docs-2.0/3.ngql-guide/10.tag-statements/improve-query-by-tag-index.md @@ -2,7 +2,7 @@ OpenCypher has the features of `SET label` and `REMOVE label` to speed up the process of querying or labeling. -Nebula Graph achieves the same operations by creating and inserting tags to an existing vertex, which can quickly query vertices based on the tag name. Users can also run `DELETE TAG` to delete some vertices that are no longer needed. +NebulaGraph achieves the same operations by creating and inserting tags to an existing vertex, which can quickly query vertices based on the tag name. Users can also run `DELETE TAG` to delete some vertices that are no longer needed. 
## Examples diff --git a/docs-2.0/3.ngql-guide/11.edge-type-statements/1.create-edge.md b/docs-2.0/3.ngql-guide/11.edge-type-statements/1.create-edge.md index 7f40545c3fc..4e53f57593a 100644 --- a/docs-2.0/3.ngql-guide/11.edge-type-statements/1.create-edge.md +++ b/docs-2.0/3.ngql-guide/11.edge-type-statements/1.create-edge.md @@ -11,7 +11,7 @@ Edge types in nGQL are similar to relationship types in openCypher. But they are ## Prerequisites -Running the `CREATE EDGE` statement requires some [privileges](../../7.data-security/1.authentication/3.role-list.md) for the graph space. Otherwise, Nebula Graph throws an error. +Running the `CREATE EDGE` statement requires some [privileges](../../7.data-security/1.authentication/3.role-list.md) for the graph space. Otherwise, NebulaGraph throws an error. ## Syntax @@ -35,7 +35,7 @@ CREATE EDGE [IF NOT EXISTS] |``|The name of the property. It must be unique for each edge type. The rules for permitted property names are the same as those for edge type names.| |``|Shows the data type of each property. For a full description of the property data types, see [Data types](../3.data-types/1.numeric.md) and [Boolean](../3.data-types/2.boolean.md).| |`NULL \| NOT NULL`|Specifies if the property supports `NULL | NOT NULL`. The default value is `NULL`.| -|`DEFAULT`|Specifies a default value for a property. The default value can be a literal value or an expression supported by Nebula Graph. If no value is specified, the default value is used when inserting a new edge.| +|`DEFAULT`|Specifies a default value for a property. The default value can be a literal value or an expression supported by NebulaGraph. If no value is specified, the default value is used when inserting a new edge.| |`COMMENT`|The remarks of a certain property or the edge type itself. The maximum length is 256 bytes. By default, there will be no comments on an edge type.| |`TTL_DURATION`|Specifies the life cycle for the property. 
The property that exceeds the specified TTL expires. The expiration threshold is the `TTL_COL` value plus the `TTL_DURATION`. The default value of `TTL_DURATION` is `0`. It means the data never expires.| |`TTL_COL`|Specifies the property to set a timeout on. The data type of the property must be `int` or `timestamp`. An edge type can only specify one field as `TTL_COL`. For more information on TTL, see [TTL options](../8.clauses-and-options/ttl-options.md).| diff --git a/docs-2.0/3.ngql-guide/11.edge-type-statements/2.drop-edge.md b/docs-2.0/3.ngql-guide/11.edge-type-statements/2.drop-edge.md index 5891407c097..bc3b1b40baf 100644 --- a/docs-2.0/3.ngql-guide/11.edge-type-statements/2.drop-edge.md +++ b/docs-2.0/3.ngql-guide/11.edge-type-statements/2.drop-edge.md @@ -8,7 +8,7 @@ This operation only deletes the Schema data. All the files or directories in the ## Prerequisites -- Running the `DROP EDGE` statement requires some [privileges](../../7.data-security/1.authentication/3.role-list.md) for the graph space. Otherwise, Nebula Graph throws an error. +- Running the `DROP EDGE` statement requires some [privileges](../../7.data-security/1.authentication/3.role-list.md) for the graph space. Otherwise, NebulaGraph throws an error. - Before you drop an edge type, make sure that the edge type does not have any indexes. Otherwise, the conflict error (`[ERROR (-1005)]: Conflict!`) will be returned. To drop an index, see [DROP INDEX](../14.native-index-statements/6.drop-native-index.md). diff --git a/docs-2.0/3.ngql-guide/11.edge-type-statements/3.alter-edge.md b/docs-2.0/3.ngql-guide/11.edge-type-statements/3.alter-edge.md index 1cf8faa6cde..505eb07ccc3 100644 --- a/docs-2.0/3.ngql-guide/11.edge-type-statements/3.alter-edge.md +++ b/docs-2.0/3.ngql-guide/11.edge-type-statements/3.alter-edge.md @@ -4,7 +4,7 @@ ## Prerequisites -- Running the `ALTER EDGE` statement requires some [privileges](../../7.data-security/1.authentication/3.role-list.md) for the graph space. 
Otherwise, Nebula Graph throws an error. +- Running the `ALTER EDGE` statement requires some [privileges](../../7.data-security/1.authentication/3.role-list.md) for the graph space. Otherwise, NebulaGraph throws an error. - Before you alter properties for an edge type, make sure that the properties are not indexed. If the properties contain any indexes, the conflict error `[ERROR (-1005)]: Conflict!` will occur when you `ALTER EDGE`. For more information on dropping an index, see [DROP INDEX](../14.native-index-statements/6.drop-native-index.md). diff --git a/docs-2.0/3.ngql-guide/11.edge-type-statements/5.describe-edge.md b/docs-2.0/3.ngql-guide/11.edge-type-statements/5.describe-edge.md index 81c8b65341f..6b3f96afe9b 100644 --- a/docs-2.0/3.ngql-guide/11.edge-type-statements/5.describe-edge.md +++ b/docs-2.0/3.ngql-guide/11.edge-type-statements/5.describe-edge.md @@ -4,7 +4,7 @@ ## Prerequisites -Running the `DESCRIBE EDGE` statement requires some [privileges](../../7.data-security/1.authentication/3.role-list.md) for the graph space. Otherwise, Nebula Graph throws an error. +Running the `DESCRIBE EDGE` statement requires some [privileges](../../7.data-security/1.authentication/3.role-list.md) for the graph space. Otherwise, NebulaGraph throws an error. ## Syntax diff --git a/docs-2.0/3.ngql-guide/12.vertex-statements/1.insert-vertex.md b/docs-2.0/3.ngql-guide/12.vertex-statements/1.insert-vertex.md index bf6caf718ee..9874da449aa 100644 --- a/docs-2.0/3.ngql-guide/12.vertex-statements/1.insert-vertex.md +++ b/docs-2.0/3.ngql-guide/12.vertex-statements/1.insert-vertex.md @@ -1,10 +1,10 @@ # INSERT VERTEX -The `INSERT VERTEX` statement inserts one or more vertices into a graph space in Nebula Graph. +The `INSERT VERTEX` statement inserts one or more vertices into a graph space in NebulaGraph. ## Prerequisites -Running the `INSERT VERTEX` statement requires some [privileges](../../7.data-security/1.authentication/3.role-list.md) for the graph space. 
Otherwise, Nebula Graph throws an error. +Running the `INSERT VERTEX` statement requires some [privileges](../../7.data-security/1.authentication/3.role-list.md) for the graph space. Otherwise, NebulaGraph throws an error. ## Syntax @@ -55,11 +55,11 @@ prop_value_list: !!! caution - Nebula Graph {{ nebula.release }} supports inserting vertices without tags. + NebulaGraph {{ nebula.release }} supports inserting vertices without tags. * `prop_name_list` contains the names of the properties on the tag. -* `VID` is the vertex ID. In Nebula Graph 2.0, string and integer VID types are supported. The VID type is set when a graph space is created. For more information, see [CREATE SPACE](../9.space-statements/1.create-space.md). +* `VID` is the vertex ID. In NebulaGraph 2.0, string and integer VID types are supported. The VID type is set when a graph space is created. For more information, see [CREATE SPACE](../9.space-statements/1.create-space.md). * `prop_value_list` must provide the property values according to the `prop_name_list`. When the `NOT NULL` constraint is set for a given property, an error is returned if no property is given. When the default value for a property is `NULL`, you can omit to specify the property value. For details, see [CREATE TAG](../10.tag-statements/1.create-tag.md). diff --git a/docs-2.0/3.ngql-guide/12.vertex-statements/2.update-vertex.md b/docs-2.0/3.ngql-guide/12.vertex-statements/2.update-vertex.md index ef75ee3a3c5..2087c1d5caa 100644 --- a/docs-2.0/3.ngql-guide/12.vertex-statements/2.update-vertex.md +++ b/docs-2.0/3.ngql-guide/12.vertex-statements/2.update-vertex.md @@ -2,7 +2,7 @@ The `UPDATE VERTEX` statement updates properties on tags of a vertex. -In Nebula Graph, `UPDATE VERTEX` supports compare-and-set (CAS). +In NebulaGraph, `UPDATE VERTEX` supports compare-and-set (CAS). !!! 
note diff --git a/docs-2.0/3.ngql-guide/12.vertex-statements/4.delete-vertex.md b/docs-2.0/3.ngql-guide/12.vertex-statements/4.delete-vertex.md index cb412834d81..3112b3c1f5a 100644 --- a/docs-2.0/3.ngql-guide/12.vertex-statements/4.delete-vertex.md +++ b/docs-2.0/3.ngql-guide/12.vertex-statements/4.delete-vertex.md @@ -4,9 +4,9 @@ By default, the `DELETE VERTEX` statement deletes vertices but the incoming and !!! compatibility - - Nebula Graph 2.x deletes vertices and their incoming and outgoing edges. + - NebulaGraph 2.x deletes vertices and their incoming and outgoing edges. - - Nebula Graph {{nebula.release}} only deletes the vertices, and does not delete the related outgoing and incoming edges of the vertices. At this time, there will be dangling edges by default. + - NebulaGraph {{nebula.release}} only deletes the vertices, and does not delete the related outgoing and incoming edges of the vertices. At this time, there will be dangling edges by default. The `DELETE VERTEX` statement deletes one vertex or multiple vertices at a time. You can use `DELETE VERTEX` together with pipes. For more information about pipe, see [Pipe operator](../5.operators/4.pipe.md). @@ -44,7 +44,7 @@ nebula> GO FROM "player100" OVER serve WHERE properties(edge).start_year == "202 ## Process of deleting vertices -Once Nebula Graph deletes the vertices, all edges (incoming and outgoing edges) of the target vertex will become dangling edges. When Nebula Graph deletes the vertices `WITH EDGE`, Nebula Graph traverses the incoming and outgoing edges related to the vertices and deletes them all. Then Nebula Graph deletes the vertices. +Once NebulaGraph deletes the vertices, all edges (incoming and outgoing edges) of the target vertex will become dangling edges. When NebulaGraph deletes the vertices `WITH EDGE`, NebulaGraph traverses the incoming and outgoing edges related to the vertices and deletes them all. Then NebulaGraph deletes the vertices. !!! 
caution diff --git a/docs-2.0/3.ngql-guide/13.edge-statements/1.insert-edge.md b/docs-2.0/3.ngql-guide/13.edge-statements/1.insert-edge.md index 02ae4c9922b..cee24469ea0 100644 --- a/docs-2.0/3.ngql-guide/13.edge-statements/1.insert-edge.md +++ b/docs-2.0/3.ngql-guide/13.edge-statements/1.insert-edge.md @@ -1,6 +1,6 @@ # INSERT EDGE -The `INSERT EDGE` statement inserts an edge or multiple edges into a graph space from a source vertex (given by src_vid) to a destination vertex (given by dst_vid) with a specific rank in Nebula Graph. +The `INSERT EDGE` statement inserts an edge or multiple edges into a graph space from a source vertex (given by src_vid) to a destination vertex (given by dst_vid) with a specific rank in NebulaGraph. When inserting an edge that already exists, `INSERT EDGE` **overrides** the edge. @@ -96,7 +96,7 @@ nebula> FETCH PROP ON e2 "14"->"15"@1 YIELD edge AS e; !!! Note - * Nebula Graph {{ nebula.release }} allows dangling edges. Therefore, you can write the edge before the source vertex or the destination vertex exists. At this time, you can get the (not written) vertex VID through `._src` or `._dst` (which is not recommended). + * NebulaGraph {{ nebula.release }} allows dangling edges. Therefore, you can write the edge before the source vertex or the destination vertex exists. At this time, you can get the (not written) vertex VID through `._src` or `._dst` (which is not recommended). * Atomic operation is not guaranteed during the entire process for now. If it fails, please try again. Otherwise, partial writing will occur. At this time, the behavior of reading the data is undefined. * Concurrently writing the same edge will cause an `edge conflict` error, so please try again later. * The inserting speed of an edge is about half that of a vertex, because in the storaged process, the insertion of an edge involves two tasks, while the insertion of a vertex involves only one task.
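As a hedged sketch of the dangling-edge behavior noted above — assuming an edge type `e2` declared without properties, since the actual schema of `e2` is not shown in the patch:

```ngql
# NebulaGraph allows dangling edges, so the edge can be written
# before the vertices "14" and "15" exist.
nebula> CREATE EDGE IF NOT EXISTS e2();
nebula> INSERT EDGE e2() VALUES "14"->"15"@1:();
nebula> FETCH PROP ON e2 "14"->"15"@1 YIELD edge AS e;
```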
diff --git a/docs-2.0/3.ngql-guide/13.edge-statements/2.update-edge.md b/docs-2.0/3.ngql-guide/13.edge-statements/2.update-edge.md index ac74d20b785..3f573514106 100644 --- a/docs-2.0/3.ngql-guide/13.edge-statements/2.update-edge.md +++ b/docs-2.0/3.ngql-guide/13.edge-statements/2.update-edge.md @@ -2,7 +2,7 @@ The `UPDATE EDGE` statement updates properties on an edge. -In Nebula Graph, `UPDATE EDGE` supports compare-and-swap (CAS). +In NebulaGraph, `UPDATE EDGE` supports compare-and-swap (CAS). ## Syntax diff --git a/docs-2.0/3.ngql-guide/13.edge-statements/4.delete-edge.md b/docs-2.0/3.ngql-guide/13.edge-statements/4.delete-edge.md index 9a94a654c00..20a872c0041 100644 --- a/docs-2.0/3.ngql-guide/13.edge-statements/4.delete-edge.md +++ b/docs-2.0/3.ngql-guide/13.edge-statements/4.delete-edge.md @@ -12,7 +12,7 @@ DELETE EDGE -> [@] [, -> CREATE TAG INDEX IF NOT EXISTS player_index_1 on player(name(10), age); !!! note - Nebula Graph follows the left matching principle to select indexes. + NebulaGraph follows the left matching principle to select indexes. (v2) \ !!! compatibility "OpenCypher compatibility" - In Nebula Graph versions earlier than 3.0.0, the prerequisite for matching a edge is that the edge itself has an index or a certain property of the edge has an index. As of version 3.0.0, there is no need to create an index for matching a edge, but you need to use `LIMIT` to limit the number of output results and you must specify the direction of the edge. + In NebulaGraph versions earlier than 3.0.0, the prerequisite for matching an edge is that the edge itself has an index or a certain property of the edge has an index. As of version 3.0.0, there is no need to create an index for matching an edge, but you need to use `LIMIT` to limit the number of output results and you must specify the direction of the edge. ```ngql nebula> MATCH ()<-[e]-() \ @@ -351,7 +351,7 @@ Just like vertices, you can specify edge types with `:` in a pattern. !!!
compatibility "OpenCypher compatibility" - In Nebula Graph versions earlier than 3.0.0, the prerequisite for matching a edge type is that the edge type itself has an index or a certain property of the edge type has an index. As of version 3.0.0, there is no need to create an index for matching a edge type, but you need to use `LIMIT` to limit the number of output results and you must specify the direction of the edge. + In NebulaGraph versions earlier than 3.0.0, the prerequisite for matching an edge type is that the edge type itself has an index or a certain property of the edge type has an index. As of version 3.0.0, there is no need to create an index for matching an edge type, but you need to use `LIMIT` to limit the number of output results and you must specify the direction of the edge. ```ngql nebula> MATCH ()-[e:follow]->() \ @@ -620,4 +620,4 @@ See [OPTIONAL MATCH](optional-match.md). !!! Performance - In Nebula Graph, the performance and resource usage of the `MATCH` statement have been optimized. But we still recommend to use `GO`, `LOOKUP`, `|`, and `FETCH` instead of `MATCH` when high performance is required. + In NebulaGraph, the performance and resource usage of the `MATCH` statement have been optimized. But we still recommend using `GO`, `LOOKUP`, `|`, and `FETCH` instead of `MATCH` when high performance is required. diff --git a/docs-2.0/3.ngql-guide/7.general-query-statements/3.go.md b/docs-2.0/3.ngql-guide/7.general-query-statements/3.go.md index 01657d6a9bd..5227625bd0f 100644 --- a/docs-2.0/3.ngql-guide/7.general-query-statements/3.go.md +++ b/docs-2.0/3.ngql-guide/7.general-query-statements/3.go.md @@ -29,7 +29,7 @@ YIELD [DISTINCT] [AS ] [, [AS ] ...] ``` -- ` STEPS`: specifies the hop number. If not specified, the default value for `N` is `one`. When `N` is `zero`, Nebula Graph does not traverse any edges and returns nothing. +- ` STEPS`: specifies the hop number. If not specified, the default value for `N` is `one`.
When `N` is `zero`, NebulaGraph does not traverse any edges and returns nothing. !!! note diff --git a/docs-2.0/3.ngql-guide/7.general-query-statements/5.lookup.md b/docs-2.0/3.ngql-guide/7.general-query-statements/5.lookup.md index da915109bc8..da10e377572 100644 --- a/docs-2.0/3.ngql-guide/7.general-query-statements/5.lookup.md +++ b/docs-2.0/3.ngql-guide/7.general-query-statements/5.lookup.md @@ -20,13 +20,13 @@ This topic applies to native nGQL only. - Correct use of indexes can speed up queries, but indexes can dramatically reduce the write performance. The performance reduction can be 90% or even more. **DO NOT** use indexes in production environments unless you are fully aware of their influences on your service. -- If the specified property is not indexed when using the `LOOKUP` statement, Nebula Graph randomly selects one of the available indexes. +- If the specified property is not indexed when using the `LOOKUP` statement, NebulaGraph randomly selects one of the available indexes. - For example, the tag `player` has two properties, `name` and `age`. Both the tag `player` itself and the property `name` have indexes, but the property `age` has no indexes. When running `LOOKUP ON player WHERE player.age == 36 YIELD player.name;`, Nebula Graph randomly uses one of the indexes of the tag `player` and the property `name`. + For example, the tag `player` has two properties, `name` and `age`. Both the tag `player` itself and the property `name` have indexes, but the property `age` has no indexes. When running `LOOKUP ON player WHERE player.age == 36 YIELD player.name;`, NebulaGraph randomly uses one of the indexes of the tag `player` and the property `name`. !!! compatibility "Legacy version compatibility" - Before the release 2.5.0, if the specified property is not indexed when using the `LOOKUP` statement, Nebula Graph reports an error and does not use other indexes. 
+ Before the release 2.5.0, if the specified property is not indexed when using the `LOOKUP` statement, NebulaGraph reports an error and does not use other indexes. ## Prerequisites diff --git a/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/.3.show-configs.md b/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/.3.show-configs.md index e3a54ad4656..d0568a64a38 100644 --- a/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/.3.show-configs.md +++ b/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/.3.show-configs.md @@ -16,7 +16,7 @@ SHOW CONFIGS [GRAPH|META|STORAGE] |`META`|Shows the configuration of the Meta Service.| |`STORAGE`|Shows the configuration of the Storage Service.| -If no service name is set in the statement, Nebula Graph shows the mutable configurations of all services. +If no service name is set in the statement, NebulaGraph shows the mutable configurations of all services. ## Example @@ -42,10 +42,10 @@ The output of `SHOW CONFIGS` is explained as follows: |Column|Description| |-|-| -|`module`|The Nebula Graph service name.| +|`module`|The NebulaGraph service name.| |`name`|The parameter name.| |`type`|The data type of the value.| |`mode`|Shows whether the parameter can be modified or not.| |`value`|The value of the parameter.| -For more information about the Nebula Graph configurations, see [Configuration](../../../5.configurations-and-logs/1.configurations/1.configurations.md). +For more information about the NebulaGraph configurations, see [Configuration](../../../5.configurations-and-logs/1.configurations/1.configurations.md).
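The `SHOW CONFIGS` syntax above can be exercised as follows; the actual output depends on the deployment, so it is omitted here:

```ngql
# Show the mutable configurations of the Storage Service only.
nebula> SHOW CONFIGS STORAGE;
# Omitting the service name lists the mutable configurations of all services.
nebula> SHOW CONFIGS;
```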
diff --git a/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/1.show-charset.md b/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/1.show-charset.md index 6f79454e7c2..bce0409cdac 100644 --- a/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/1.show-charset.md +++ b/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/1.show-charset.md @@ -2,7 +2,7 @@ The `SHOW CHARSET` statement shows the available character sets. -Currently available types are `utf8` and `utf8mb4`. The default charset type is `utf8`. Nebula Graph extends the `uft8` to support four-byte characters. Therefore `utf8` and `utf8mb4` are equivalent. +Currently available types are `utf8` and `utf8mb4`. The default charset type is `utf8`. NebulaGraph extends `utf8` to support four-byte characters. Therefore, `utf8` and `utf8mb4` are equivalent. ## Syntax diff --git a/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/10.show-roles.md b/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/10.show-roles.md index 10c324ffdb8..68b52afbfb5 100644 --- a/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/10.show-roles.md +++ b/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/10.show-roles.md @@ -4,11 +4,11 @@ The `SHOW ROLES` statement shows the roles that are assigned to a user account. The return message differs according to the role of the user who is running this statement: -* If the user is a `GOD` or `ADMIN` and is granted access to the specified graph space, Nebula Graph shows all roles in this graph space except for `GOD`. +* If the user is a `GOD` or `ADMIN` and is granted access to the specified graph space, NebulaGraph shows all roles in this graph space except for `GOD`. -* If the user is a `DBA`, `USER`, or `GUEST` and is granted access to the specified graph space, Nebula Graph shows the user's own role in this graph space.
+* If the user is a `DBA`, `USER`, or `GUEST` and is granted access to the specified graph space, NebulaGraph shows the user's own role in this graph space. -* If the user does not have access to the specified graph space, Nebula Graph returns `PermissionError`. +* If the user does not have access to the specified graph space, NebulaGraph returns `PermissionError`. For more information about roles, see [Roles and privileges](../../../7.data-security/1.authentication/3.role-list.md). diff --git a/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/12.show-spaces.md b/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/12.show-spaces.md index 38eff101f02..d420edb8091 100644 --- a/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/12.show-spaces.md +++ b/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/12.show-spaces.md @@ -1,6 +1,6 @@ # SHOW SPACES -The `SHOW SPACES` statement shows existing graph spaces in Nebula Graph. +The `SHOW SPACES` statement shows existing graph spaces in NebulaGraph. For how to create a graph space, see [CREATE SPACE](./../../9.space-statements/1.create-space.md). diff --git a/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/2.show-collation.md b/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/2.show-collation.md index 7517438b26a..a86b4a77fc4 100644 --- a/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/2.show-collation.md +++ b/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/2.show-collation.md @@ -1,6 +1,6 @@ # SHOW COLLATION -The `SHOW COLLATION` statement shows the collations supported by Nebula Graph. +The `SHOW COLLATION` statement shows the collations supported by NebulaGraph. Currently available types are: `utf8_bin`, `utf8_general_ci`, `utf8mb4_bin`, and `utf8mb4_general_ci`. 
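A minimal illustration of the two statements described above — `SHOW CHARSET` returns `utf8` and `utf8mb4`, and `SHOW COLLATION` returns the four collations listed:

```ngql
nebula> SHOW CHARSET;
nebula> SHOW COLLATION;
```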
diff --git a/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/6.show-hosts.md b/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/6.show-hosts.md index fdb014c0920..233fad58ecf 100644 --- a/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/6.show-hosts.md +++ b/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/6.show-hosts.md @@ -10,7 +10,7 @@ SHOW HOSTS [GRAPH | STORAGE | META]; !!! note - For a Nebula Graph cluster installed with the source code, the version of the cluster will not be displayed in the output after executing the command `SHOW HOSTS (GRAPH | STORAGE | META)` with the service name. + For a NebulaGraph cluster installed with the source code, the version of the cluster will not be displayed in the output after executing the command `SHOW HOSTS (GRAPH | STORAGE | META)` with the service name. ## Examples diff --git a/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/8.show-indexes.md b/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/8.show-indexes.md index e084bbee601..c5d7e06e5ad 100644 --- a/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/8.show-indexes.md +++ b/docs-2.0/3.ngql-guide/7.general-query-statements/6.show/8.show-indexes.md @@ -31,4 +31,4 @@ nebula> SHOW EDGE INDEXES; !!! Compatibility "Legacy version compatibility" - In Nebula Graph 2.x, `SHOW TAG/EDGE INDEXES` only returns `Names`. + In NebulaGraph 2.x, `SHOW TAG/EDGE INDEXES` only returns `Names`. 
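For reference, the index-listing statements discussed above can be run as a pair; in versions after 2.x they return the index fields as well as the names:

```ngql
nebula> SHOW TAG INDEXES;
nebula> SHOW EDGE INDEXES;
```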
diff --git a/docs-2.0/3.ngql-guide/8.clauses-and-options/limit.md b/docs-2.0/3.ngql-guide/8.clauses-and-options/limit.md index de86f1df7ae..6c259368159 100644 --- a/docs-2.0/3.ngql-guide/8.clauses-and-options/limit.md +++ b/docs-2.0/3.ngql-guide/8.clauses-and-options/limit.md @@ -191,5 +191,5 @@ nebula> MATCH (v:player{name:"Tim Duncan"}) --> (v2) \ diff --git a/docs-2.0/3.ngql-guide/8.clauses-and-options/ttl-options.md b/docs-2.0/3.ngql-guide/8.clauses-and-options/ttl-options.md index 06aed9f4cd4..a0316356cdb 100644 --- a/docs-2.0/3.ngql-guide/8.clauses-and-options/ttl-options.md +++ b/docs-2.0/3.ngql-guide/8.clauses-and-options/ttl-options.md @@ -34,7 +34,7 @@ Since an edge can have only one edge type, once an edge property expires, the ed The expired data are still stored on the disk, but queries will filter them out. -Nebula Graph automatically deletes the expired data and reclaims the disk space during the next [compaction](../../8.service-tuning/compaction.md). +NebulaGraph automatically deletes the expired data and reclaims the disk space during the next [compaction](../../8.service-tuning/compaction.md). !!! note diff --git a/docs-2.0/3.ngql-guide/9.space-statements/1.create-space.md b/docs-2.0/3.ngql-guide/9.space-statements/1.create-space.md index 12ad4e144b4..e8329e60c05 100644 --- a/docs-2.0/3.ngql-guide/9.space-statements/1.create-space.md +++ b/docs-2.0/3.ngql-guide/9.space-statements/1.create-space.md @@ -1,6 +1,6 @@ # CREATE SPACE -Graph spaces are used to store data in a physically isolated way in Nebula Graph, which is similar to the database concept in MySQL. The `CREATE SPACE` statement can create a new graph space or clone the schema of an existing graph space. +Graph spaces are used to store data in a physically isolated way in NebulaGraph, which is similar to the database concept in MySQL. The `CREATE SPACE` statement can create a new graph space or clone the schema of an existing graph space. 
## Prerequisites @@ -22,29 +22,29 @@ CREATE SPACE [IF NOT EXISTS] ( |Parameter|Description| |:---|:---| |`IF NOT EXISTS`|Detects if the related graph space exists. If it does not exist, a new one will be created. The graph space existence detection here only compares the graph space name (excluding properties).| -|``|Uniquely identifies a graph space in a Nebula Graph instance. The name of the graph space starts with a letter, supports 1 to 4 bytes UTF-8 encoded characters, such as English letters (case-sensitive), digits, and Chinese characters, but does not support special characters except underscores. To use special characters or reserved keywords as identifiers, quote them with backticks. For more information, see [Keywords and reserved words](../../3.ngql-guide/1.nGQL-overview/keywords-and-reserved-words.md).| +|``|Uniquely identifies a graph space in a NebulaGraph instance. The name of the graph space starts with a letter, supports 1 to 4 bytes UTF-8 encoded characters, such as English letters (case-sensitive), digits, and Chinese characters, but does not support special characters except underscores. To use special characters or reserved keywords as identifiers, quote them with backticks. For more information, see [Keywords and reserved words](../../3.ngql-guide/1.nGQL-overview/keywords-and-reserved-words.md).| |`partition_num`|Specifies the number of partitions in each replica. The suggested value is 20 times (2 times for HDD) the number of the hard disks in the cluster. For example, if you have three hard disks in the cluster, we recommend that you set 60 partitions. The default value is 100.| |`replica_factor`|Specifies the number of replicas in the cluster. The suggested number is 3 in a production environment and 1 in a test environment. The replica number must be an **odd number** for the need of quorum-based voting. The default value is 1.| -|`vid_type`|A required parameter. Specifies the VID type in a graph space. 
Available values are `FIXED_STRING(N)` and `INT64`. `INT` equals to `INT64`. `FIXED_STRING()` specifies the VID as a string, while `INT64` specifies it as an integer. `N` represents the maximum length of the VIDs. If you set a VID that is longer than `N` characters, Nebula Graph throws an error.| +`vid_type`|A required parameter. Specifies the VID type in a graph space. Available values are `FIXED_STRING(N)` and `INT64`. `INT` is equal to `INT64`. `FIXED_STRING()` specifies the VID as a string, while `INT64` specifies it as an integer. `N` represents the maximum length of the VIDs. If you set a VID that is longer than `N` characters, NebulaGraph throws an error.| |`COMMENT`|The remarks of the graph space. The maximum length is 256 bytes. By default, there are no comments on a space.| !!! caution - - If the replica number is set to one, you will not be able to load balance or scale out the Nebula Graph Storage Service with the [BALANCE](../../8.service-tuning/load-balance.md) statement. + - If the replica number is set to one, you will not be able to load balance or scale out the NebulaGraph Storage Service with the [BALANCE](../../8.service-tuning/load-balance.md) statement. - Restrictions on VID type change and VID length: - - For Nebula Graph v1.x, the type of VIDs can only be `INT64`, and the String type is not allowed. For Nebula Graph v2.x, both `INT64` and `FIXED_STRING()` VID types are allowed. You must specify the VID type when creating a graph space, and use the same VID type in `INSERT` statements, otherwise, an error message `Wrong vertex id type: 1001` occurs. + - For NebulaGraph v1.x, the type of VIDs can only be `INT64`, and the String type is not allowed. For NebulaGraph v2.x, both `INT64` and `FIXED_STRING()` VID types are allowed. You must specify the VID type when creating a graph space, and use the same VID type in `INSERT` statements, otherwise, an error message `Wrong vertex id type: 1001` occurs.
- - The length of the VID should not be longer than `N` characters. If it exceeds `N`, Nebula Graph throws `The VID must be a 64-bit integer or a string fitting space vertex id length limit.`. + - The length of the VID should not be longer than `N` characters. If it exceeds `N`, NebulaGraph throws `The VID must be a 64-bit integer or a string fitting space vertex id length limit.`. !!! compatibility "Legacy version compatibility" - For Nebula Graph v2.x before v2.5.0, `vid_type` is optional and defaults to `FIXED_STRING(8)`. + For NebulaGraph v2.x before v2.5.0, `vid_type` is optional and defaults to `FIXED_STRING(8)`. !!! note diff --git a/docs-2.0/3.ngql-guide/9.space-statements/2.use-space.md b/docs-2.0/3.ngql-guide/9.space-statements/2.use-space.md index 43b0f99526c..9ca11300d71 100644 --- a/docs-2.0/3.ngql-guide/9.space-statements/2.use-space.md +++ b/docs-2.0/3.ngql-guide/9.space-statements/2.use-space.md @@ -4,7 +4,7 @@ ## Prerequisites -Running the `USE` statement requires some [privileges](../../7.data-security/1.authentication/3.role-list.md) for the graph space. Otherwise, Nebula Graph throws an error. +Running the `USE` statement requires some [privileges](../../7.data-security/1.authentication/3.role-list.md) for the graph space. Otherwise, NebulaGraph throws an error. ## Syntax @@ -30,4 +30,4 @@ nebula> USE space2; You cannot use two graph spaces in one statement. - Different from Fabric Cypher, graph spaces in Nebula Graph are fully isolated from each other. Making a graph space as the working graph space prevents you from accessing other spaces. The only way to traverse in a new graph space is to switch by the `USE` statement. In Fabric Cypher, you can use two graph spaces in one statement (using the `USE + CALL` syntax). But in Nebula Graph, you can only use one graph space in one statement. + Different from Fabric Cypher, graph spaces in NebulaGraph are fully isolated from each other. 
Setting a graph space as the working graph space prevents you from accessing other spaces. The only way to traverse in a new graph space is to switch with the `USE` statement. In Fabric Cypher, you can use two graph spaces in one statement (using the `USE + CALL` syntax). But in NebulaGraph, you can only use one graph space in one statement.
diff --git a/docs-2.0/3.ngql-guide/9.space-statements/3.show-spaces.md b/docs-2.0/3.ngql-guide/9.space-statements/3.show-spaces.md
index cb5beb6cdc4..7a0cbc1b90d 100644
--- a/docs-2.0/3.ngql-guide/9.space-statements/3.show-spaces.md
+++ b/docs-2.0/3.ngql-guide/9.space-statements/3.show-spaces.md
@@ -1,6 +1,6 @@ # SHOW SPACES
-`SHOW SPACES` lists all the graph spaces in the Nebula Graph examples.
+`SHOW SPACES` lists all the graph spaces in the NebulaGraph instance.
## Syntax
diff --git a/docs-2.0/3.ngql-guide/9.space-statements/5.drop-space.md b/docs-2.0/3.ngql-guide/9.space-statements/5.drop-space.md
index b36316262da..45c01285f5e 100644
--- a/docs-2.0/3.ngql-guide/9.space-statements/5.drop-space.md
+++ b/docs-2.0/3.ngql-guide/9.space-statements/5.drop-space.md
@@ -16,7 +16,7 @@ You can use the `IF EXISTS` keywords when dropping spaces. These keywords automa
!!! compatibility "Legacy version compatibility"
- In Nebula Graph versions earlier than 3.1.0, the `DROP SPACE` statement does not remove all the files and directories from the disk by default.
+ In NebulaGraph versions earlier than 3.1.0, the `DROP SPACE` statement does not remove all the files and directories from the disk by default.
!!! caution
diff --git a/docs-2.0/3.ngql-guide/9.space-statements/6.clear-space.md b/docs-2.0/3.ngql-guide/9.space-statements/6.clear-space.md
index 20a49fb59c6..e39907e5893 100644
--- a/docs-2.0/3.ngql-guide/9.space-statements/6.clear-space.md
+++ b/docs-2.0/3.ngql-guide/9.space-statements/6.clear-space.md
@@ -15,8 +15,8 @@ Only the [God role](../../7.data-security/1.authentication/3.role-list.md) has t
!!! 
enterpriseonly - - The Nebula Graph Community Edition does not support blocking data writing while allowing `CLEAR SPACE`. - - The Nebula Graph Enterprise Edition supports blocking data writing by setting `VARIABLE read_only=true` before running `CLEAR SPACE`. After the data are cleared successfully, run `SET VARIABLE read_only=false` to allow data writing again. + - The NebulaGraph Community Edition does not support blocking data writing while allowing `CLEAR SPACE`. + - The NebulaGraph Enterprise Edition supports blocking data writing by setting `VARIABLE read_only=true` before running `CLEAR SPACE`. After the data are cleared successfully, run `SET VARIABLE read_only=false` to allow data writing again. ## Syntax diff --git a/docs-2.0/4.deployment-and-installation/1.resource-preparations.md b/docs-2.0/4.deployment-and-installation/1.resource-preparations.md index ba5280f2641..abccb52acf9 100644 --- a/docs-2.0/4.deployment-and-installation/1.resource-preparations.md +++ b/docs-2.0/4.deployment-and-installation/1.resource-preparations.md @@ -1,14 +1,14 @@ -# Prepare resources for compiling, installing, and running Nebula Graph +# Prepare resources for compiling, installing, and running NebulaGraph -This topic describes the requirements and suggestions for compiling and installing Nebula Graph, as well as how to estimate the resource you need to reserve for running a Nebula Graph cluster. +This topic describes the requirements and suggestions for compiling and installing NebulaGraph, as well as how to estimate the resource you need to reserve for running a NebulaGraph cluster. !!! enterpriseonly - In addition to installing Nebula Graph with the source code, the Dashboard Enterprise Edition tool is a better and convenient choice for installing Community and Enterprise Edition Nebula Graph. For details, see [Deploy Dashboard](../nebula-dashboard-ent/2.deploy-connect-dashboard-ent.md). 
+ In addition to installing NebulaGraph with the source code, the Dashboard Enterprise Edition tool is a better and convenient choice for installing Community and Enterprise Edition NebulaGraph. For details, see [Deploy Dashboard](../nebula-dashboard-ent/2.deploy-connect-dashboard-ent.md). ## About storage devices -Nebula Graph is designed and implemented for NVMe SSD. All default parameters are optimized for the SSD devices and require extremely high IOPS and low latency. +NebulaGraph is designed and implemented for NVMe SSD. All default parameters are optimized for the SSD devices and require extremely high IOPS and low latency. - Due to the poor IOPS capability and long random seek latency, HDD is not recommended. Users may encounter many problems when using HDD. @@ -22,11 +22,11 @@ Nebula Graph is designed and implemented for NVMe SSD. All default parameters ar !!! note - Starting with 3.0.2, you can run containerized Nebula Graph databases on Docker Desktop for ARM macOS or on ARM Linux servers. + Starting with 3.0.2, you can run containerized NebulaGraph databases on Docker Desktop for ARM macOS or on ARM Linux servers. ## Requirements for compiling the source code -### Hardware requirements for compiling Nebula Graph +### Hardware requirements for compiling NebulaGraph | Item | Requirement | | ---------------- | ----------- | @@ -34,17 +34,17 @@ Nebula Graph is designed and implemented for NVMe SSD. All default parameters ar | Memory | 4 GB | | Disk | 10 GB, SSD | -### Supported operating systems for compiling Nebula Graph +### Supported operating systems for compiling NebulaGraph -For now, we can only compile Nebula Graph in the Linux system. We recommend that you use any Linux system with kernel version `4.15` or above. +For now, we can only compile NebulaGraph in the Linux system. We recommend that you use any Linux system with kernel version `4.15` or above. !!! 
note - To install Nebula Graph on Linux systems with kernel version lower than required, use [RPM/DEB packages](2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) or [TAR files](2.compile-and-install-nebula-graph/4.install-nebula-graph-from-tar.md). + To install NebulaGraph on Linux systems with kernel version lower than required, use [RPM/DEB packages](2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) or [TAR files](2.compile-and-install-nebula-graph/4.install-nebula-graph-from-tar.md). -### Software requirements for compiling Nebula Graph +### Software requirements for compiling NebulaGraph -You must have the correct version of the software listed below to compile Nebula Graph. If they are not as required or you are not sure, follow the steps in [Prepare software for compiling Nebula Graph](#prepare_software_for_compiling_nebula_graph) to get them ready. +You must have the correct version of the software listed below to compile NebulaGraph. If they are not as required or you are not sure, follow the steps in [Prepare software for compiling NebulaGraph](#prepare_software_for_compiling_nebula_graph) to get them ready. | Software | Version | Note | | ---------------- | ---------------------- | --------------------------------------------------------- | @@ -68,7 +68,7 @@ You must have the correct version of the software listed below to compile Nebula Other third-party software will be automatically downloaded and installed to the `build` directory at the configure (cmake) stage. -### Prepare software for compiling Nebula Graph +### Prepare software for compiling NebulaGraph If part of the dependencies are missing or the versions does not meet the requirements, manually install them with the following steps. You can skip unnecessary dependencies or steps according to your needs. @@ -116,7 +116,7 @@ If part of the dependencies are missing or the versions does not meet the requir gettext ``` -2. 
Check if the GCC and cmake on your host are in the right version. See [Software requirements for compiling Nebula Graph](#software_requirements_for_compiling_nebula_graph) for the required versions. +2. Check if the GCC and cmake on your host are in the right version. See [Software requirements for compiling NebulaGraph](#software_requirements_for_compiling_nebula_graph) for the required versions. ```bash @@ -145,7 +145,7 @@ If part of the dependencies are missing or the versions does not meet the requir apt install gcc-11 g++-11 ``` -## Requirements and suggestions for installing Nebula Graph in test environments +## Requirements and suggestions for installing NebulaGraph in test environments ### Hardware requirements for test environments @@ -158,10 +158,10 @@ If part of the dependencies are missing or the versions does not meet the requir ### Supported operating systems for test environments -For now, we can only install Nebula Graph in the Linux system. To install Nebula Graph in a test environment, we recommend that you use any Linux system with kernel version `3.9` or above. +For now, we can only install NebulaGraph in the Linux system. To install NebulaGraph in a test environment, we recommend that you use any Linux system with kernel version `3.9` or above. ### Suggested service architecture for test environments @@ -174,7 +174,7 @@ You can adjust some of the kernel parameters to better accommodate the need for For example, for a single-machine test environment, you can deploy 1 metad, 1 storaged, and 1 graphd processes in the machine. 
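The kernel-version requirements quoted above (`4.15` or above for compiling, `3.9` or above for a test deployment) can be checked with a small script. This is a sketch: `version_ge` is a hypothetical helper, not a NebulaGraph tool, and it relies on GNU `sort -V`.

```shell
# Returns success if version $1 >= version $2 (uses GNU `sort -V`).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

required="4.15"                      # kernel needed for compiling; use 3.9 for a test deployment
kernel="$(uname -r | cut -d- -f1)"   # e.g. "5.15.0"
if version_ge "$kernel" "$required"; then
  echo "kernel $kernel OK for compiling"
else
  echo "kernel $kernel is older than $required"
fi
```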
-For a more common test environment, such as a cluster of 3 machines (named as A, B, and C), you can deploy Nebula Graph as follows: +For a more common test environment, such as a cluster of 3 machines (named as A, B, and C), you can deploy NebulaGraph as follows: | Machine name | Number of metad | Number of storaged | Number of graphd | | ------------ | --------------- | ------------------ | ---------------- | @@ -182,7 +182,7 @@ For a more common test environment, such as a cluster of 3 machines (named as A, | B | None | 1 | 1 | | C | None | 1 | 1 | -## Requirements and suggestions for installing Nebula Graph in production environments +## Requirements and suggestions for installing NebulaGraph in production environments ### Hardware requirements for production environments @@ -195,9 +195,9 @@ For a more common test environment, such as a cluster of 3 machines (named as A, ### Supported operating systems for production environments -For now, we can only install Nebula Graph in the Linux system. To install Nebula Graph in a production environment, we recommend that you use any Linux system with kernel version 3.9 or above. +For now, we can only install NebulaGraph in the Linux system. To install NebulaGraph in a production environment, we recommend that you use any Linux system with kernel version 3.9 or above. -Users can adjust some of the kernel parameters to better accommodate the need for running Nebula Graph. For more information, see [kernel configuration](../5.configurations-and-logs/1.configurations/6.kernel-config.md). +Users can adjust some of the kernel parameters to better accommodate the need for running NebulaGraph. For more information, see [kernel configuration](../5.configurations-and-logs/1.configurations/6.kernel-config.md). 
### Suggested service architecture for production environments @@ -215,7 +215,7 @@ Each metad process automatically creates and maintains a replica of the metadata The number of storaged processes does not affect the number of graph space replicas. -Users can deploy multiple processes on a single machine. For example, on a cluster of 5 machines (named as A, B, C, D, and E), you can deploy Nebula Graph as follows: +Users can deploy multiple processes on a single machine. For example, on a cluster of 5 machines (named as A, B, C, D, and E), you can deploy NebulaGraph as follows: | Machine name | Number of metad | Number of storaged | Number of graphd | | ------------ | --------------- | ------------------ | ---------------- | @@ -225,9 +225,9 @@ Users can deploy multiple processes on a single machine. For example, on a clust | D | None | 1 | 1 | | E | None | 1 | 1 | -## Capacity requirements for running a Nebula Graph cluster +## Capacity requirements for running a NebulaGraph cluster -Users can estimate the memory, disk space, and partition number needed for a Nebula Graph cluster of 3 replicas as follows. +Users can estimate the memory, disk space, and partition number needed for a NebulaGraph cluster of 3 replicas as follows. | Resource |Unit| How to estimate |Description| |:--- |:---|:--- |:---| @@ -253,7 +253,7 @@ Users can estimate the memory, disk space, and partition number needed for a Neb [This part might be moved to the configuration doc map later.] -Nebula Graph is intended for NVMe SSD, but if you don't have a choice, optimizing the configuration as follows may better accommodate HDD. +NebulaGraph is intended for NVMe SSD, but if you don't have a choice, optimizing the configuration as follows may better accommodate HDD. 
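The rule of thumb given earlier for `partition_num` — roughly 20 times the number of disks for SSDs, 2 times for HDDs — can be turned into a quick calculation. A sketch; the disk count of 3 is an assumed example:

```shell
# Suggested partition_num per the guideline above (example assumes 3 disks).
disks=3
ssd_partitions=$((disks * 20))   # SSD cluster: 20x the disk count
hdd_partitions=$((disks * 2))    # HDD cluster: 2x the disk count
echo "SSD: ${ssd_partitions} partitions, HDD: ${hdd_partitions} partitions"
# → SSD: 60 partitions, HDD: 6 partitions
```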
* etc/nebula-storage.conf: diff --git a/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md b/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md index b86539d9bb9..6c6316c4e6d 100644 --- a/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md +++ b/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md @@ -1,22 +1,22 @@ -# Install Nebula Graph by compiling the source code +# Install NebulaGraph by compiling the source code -Installing Nebula Graph from the source code allows you to customize the compiling and installation settings and test the latest features. +Installing NebulaGraph from the source code allows you to customize the compiling and installation settings and test the latest features. ## Prerequisites -- Users have to prepare correct resources described in [Prepare resources for compiling, installing, and running Nebula Graph](../1.resource-preparations.md). +- Users have to prepare correct resources described in [Prepare resources for compiling, installing, and running NebulaGraph](../1.resource-preparations.md). !!! note - Compilation of Nebula Graph offline is not currently supported. + Compilation of NebulaGraph offline is not currently supported. -- The host to be installed with Nebula Graph has access to the Internet. +- The host to be installed with NebulaGraph has access to the Internet. ## Installation steps -1. Use Git to clone the source code of Nebula Graph to the host. +1. Use Git to clone the source code of NebulaGraph to the host. - - [Recommended] To install Nebula Graph {{nebula.release}}, run the following command. + - [Recommended] To install NebulaGraph {{nebula.release}}, run the following command. 
```bash
$ git clone --branch {{nebula.branch}} https://github.com/vesoft-inc/nebula.git
@@ -52,11 +52,11 @@ Installing Nebula Graph from the source code allows you to customize the compili
$ cmake -DCMAKE_INSTALL_PREFIX=/usr/local/nebula -DENABLE_TESTING=OFF -DCMAKE_BUILD_TYPE=Release ..
```
-5. Compile Nebula Graph.
+5. Compile NebulaGraph.
!!! Note
- Check [Prepare resources for compiling, installing, and running Nebula Graph](../1.resource-preparations.md).
+ Check [Prepare resources for compiling, installing, and running NebulaGraph](../1.resource-preparations.md).
To speed up the compiling, use the `-j` option to set a concurrent number `N`. It should be $\min(\text{CPU core number}, \frac{\text{memory size (GB)}}{2})$.
```bash
$ make -j{N} # E.g., make -j2
```
-6. Install Nebula Graph.
+6. Install NebulaGraph.
```bash
$ sudo make install
@@ -74,7 +74,7 @@ Installing Nebula Graph from the source code allows you to customize the compili
## Update the master branch
-The source code of the master branch changes frequently. If the corresponding Nebula Graph release is installed, update it in the following steps.
+The source code of the master branch changes frequently. If the corresponding NebulaGraph release is installed, update it in the following steps.
1. In the `nebula` directory, run `git pull upstream master` to update the source code.
@@ -84,7 +84,7 @@ The source code of the master branch changes frequently. 
If the corresponding Ne - (Enterprise Edition)[Deploy license](../deploy-license.md) -- [Manage Nebula Graph services](../../2.quick-start/5.start-stop-service.md) +- [Manage NebulaGraph services](../../2.quick-start/5.start-stop-service.md) ## CMake variables @@ -106,31 +106,31 @@ The following CMake variables can be used at the configure (cmake) stage to adju ### ENABLE_TESTING -`ENABLE_TESTING` is `ON` by default and unit tests are built with the Nebula Graph services. If you just need the service modules, set it to `OFF`. +`ENABLE_TESTING` is `ON` by default and unit tests are built with the NebulaGraph services. If you just need the service modules, set it to `OFF`. ### ENABLE_ASAN -`ENABLE_ASAN` is `OFF` by default and the building of ASan (AddressSanitizer), a memory error detector, is disabled. To enable it, set `ENABLE_ASAN` to `ON`. This variable is intended for Nebula Graph developers. +`ENABLE_ASAN` is `OFF` by default and the building of ASan (AddressSanitizer), a memory error detector, is disabled. To enable it, set `ENABLE_ASAN` to `ON`. This variable is intended for NebulaGraph developers. ### CMAKE_BUILD_TYPE -Nebula Graph supports the following building types of `MAKE_BUILD_TYPE`: +NebulaGraph supports the following building types of `MAKE_BUILD_TYPE`: - `Debug` - The default value of `CMAKE_BUILD_TYPE`. It indicates building Nebula Graph with the debug info but not the optimization options. + The default value of `CMAKE_BUILD_TYPE`. It indicates building NebulaGraph with the debug info but not the optimization options. - `Release` - It indicates building Nebula Graph with the optimization options but not the debug info. + It indicates building NebulaGraph with the optimization options but not the debug info. - `RelWithDebInfo` - It indicates building Nebula Graph with the optimization options and the debug info. + It indicates building NebulaGraph with the optimization options and the debug info. 
- `MinSizeRel` - It indicates building Nebula Graph with the optimization options for controlling the code size but not the debug info. + It indicates building NebulaGraph with the optimization options for controlling the code size but not the debug info. ### ENABLE_INCLUDE_WHAT_YOU_USE @@ -155,7 +155,7 @@ $ cmake -DCMAKE_C_COMPILER= -DCMAKE_CXX_COMPILER= diff --git a/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md b/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md index 6e341320379..1e9871ca875 100644 --- a/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md +++ b/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md @@ -1,6 +1,6 @@ -# Deploy Nebula Graph with Docker Compose +# Deploy NebulaGraph with Docker Compose -Using Docker Compose can quickly deploy Nebula Graph services based on the prepared configuration file. It is only recommended to use this method when testing functions of Nebula Graph. +Using Docker Compose can quickly deploy NebulaGraph services based on the prepared configuration file. It is only recommended to use this method when testing functions of NebulaGraph. ## Prerequisites @@ -12,19 +12,19 @@ Using Docker Compose can quickly deploy Nebula Graph services based on the prepa | Docker Compose | Latest | [Install Docker Compose](https://docs.docker.com/compose/install/) | | Git | Latest | [Download Git](https://git-scm.com/download/) | -* If you are deploying Nebula Graph as a non-root user, grant the user with Docker-related privileges. For detailed instructions, see [Manage Docker as a non-root user](https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user). 
+* If you are deploying NebulaGraph as a non-root user, grant the user Docker-related privileges. For detailed instructions, see [Manage Docker as a non-root user](https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user).
* You have started the Docker service on your host.
-* If you have already deployed another version of Nebula Graph with Docker Compose on your host, to avoid compatibility issues, you need to delete the `nebula-docker-compose/data` directory.
+* If you have already deployed another version of NebulaGraph with Docker Compose on your host, to avoid compatibility issues, you need to delete the `nebula-docker-compose/data` directory.
-## How to deploy and connect to Nebula Graph
+## How to deploy and connect to NebulaGraph
1. Clone the `{{dockercompose.release}}` branch of the `nebula-docker-compose` repository to your host with Git.
!!! danger
- The `master` branch contains the untested code for the latest Nebula Graph development release. **DO NOT** use this release in a production environment.
+ The `master` branch contains the untested code for the latest NebulaGraph development release. **DO NOT** use this release in a production environment.
```bash
$ git clone -b {{dockercompose.branch}} https://github.com/vesoft-inc/nebula-docker-compose.git
@@ -32,7 +32,7 @@ Using Docker Compose can quickly deploy Nebula Graph services based on the prepa
!!! Note
- The `x.y` version of Docker Compose aligns to the `x.y` version of Nebula Graph. For the Nebula Graph `z` version, Docker Compose does not publish the corresponding `z` version, but pulls the `z` version of the Nebula Graph image.
+ The `x.y` version of Docker Compose aligns with the `x.y` version of NebulaGraph. For the NebulaGraph `z` version, Docker Compose does not publish the corresponding `z` version, but pulls the `z` version of the NebulaGraph image.
2. Go to the `nebula-docker-compose` directory. 
@@ -40,13 +40,13 @@ Using Docker Compose can quickly deploy Nebula Graph services based on the prepa $ cd nebula-docker-compose/ ``` -3. Run the following command to start all the Nebula Graph services. +3. Run the following command to start all the NebulaGraph services. - Starting with 3.0.2, Nebula Graph comes with ARM64 Linux Docker images. You can run containerized Nebula Graph databases on Docker Desktop for ARM macOS or on ARM Linux servers. + Starting with 3.0.2, NebulaGraph comes with ARM64 Linux Docker images. You can run containerized NebulaGraph databases on Docker Desktop for ARM macOS or on ARM Linux servers. !!! Note - Update the [Nebula Graph images](#how_to_upgrade_or_update_the_docker_images_of_nebula_graph_services) and [Nebula Console images](#how_to_update_the_nebula_console_client) first if they are out of date. + Update the [NebulaGraph images](#how_to_upgrade_or_update_the_docker_images_of_nebula_graph_services) and [Nebula Console images](#how_to_update_the_nebula_console_client) first if they are out of date. ```bash [nebula-docker-compose]$ docker-compose up -d @@ -63,13 +63,13 @@ Using Docker Compose can quickly deploy Nebula Graph services based on the prepa !!! Note - For more information of the preceding services, see [Nebula Graph architecture](../../1.introduction/3.nebula-graph-architecture/1.architecture-overview.md). + For more information of the preceding services, see [NebulaGraph architecture](../../1.introduction/3.nebula-graph-architecture/1.architecture-overview.md). -4. Connect to Nebula Graph. +4. Connect to NebulaGraph. !!! Note - Starting from Nebula Graph version 3.1.0, nebula-docker-compose automatically starts a Nebula Console docker container and adds the storage host to the cluster (i.e. `ADD HOSTS` command). + Starting from NebulaGraph version 3.1.0, nebula-docker-compose automatically starts a Nebula Console docker container and adds the storage host to the cluster (i.e. `ADD HOSTS` command). 1. 
Run the following command to view the name of Nebula Console docker container. @@ -89,7 +89,7 @@ Using Docker Compose can quickly deploy Nebula Graph services based on the prepa / # ``` - 3. Connect to Nebula Graph with Nebula Console. + 3. Connect to NebulaGraph with Nebula Console. ```bash / # ./usr/local/bin/nebula-console -u -p --address=graphd --port=9669 @@ -114,9 +114,9 @@ Using Docker Compose can quickly deploy Nebula Graph services based on the prepa 5. Run `exit` twice to switch back to your terminal (shell). -## Check the Nebula Graph service status and ports +## Check the NebulaGraph service status and ports -Run `docker-compose ps` to list all the services of Nebula Graph and their status and ports. +Run `docker-compose ps` to list all the services of NebulaGraph and their status and ports. ```bash $ docker-compose ps @@ -133,11 +133,11 @@ nebuladockercompose_storaged1_1 /usr/local/nebula/bin/nebu ... Up 0.0.0 nebuladockercompose_storaged2_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49167->19779/tcp,:::49167->19779/tcp, 0.0.0.0:49164->19780/tcp,:::49164->19780/tcp, 9777/tcp, 9778/tcp, 0.0.0.0:49170->9779/tcp,:::49170->9779/tcp, 9780/tcp ``` -Nebula Graph provides services to the clients through port `9669` by default. To use other ports, modify the `docker-compose.yaml` file in the `nebula-docker-compose` directory and restart the Nebula Graph services. +NebulaGraph provides services to the clients through port `9669` by default. To use other ports, modify the `docker-compose.yaml` file in the `nebula-docker-compose` directory and restart the NebulaGraph services. ## Check the service data and logs -All the data and logs of Nebula Graph are stored persistently in the `nebula-docker-compose/data` and `nebula-docker-compose/logs` directories. +All the data and logs of NebulaGraph are stored persistently in the `nebula-docker-compose/data` and `nebula-docker-compose/logs` directories. 
The structure of the directories is as follows: @@ -163,15 +163,15 @@ nebula-docker-compose/ └── storage2 ``` -## Stop the Nebula Graph services +## Stop the NebulaGraph services -You can run the following command to stop the Nebula Graph services: +You can run the following command to stop the NebulaGraph services: ```bash $ docker-compose down ``` -The following information indicates you have successfully stopped the Nebula Graph services: +The following information indicates you have successfully stopped the NebulaGraph services: ```bash Stopping nebuladockercompose_console_1 ... done @@ -199,11 +199,11 @@ Removing network nebuladockercompose_nebula-net !!! danger - The parameter `-v` in the command `docker-compose down -v` will **delete** all your local Nebula Graph storage data. Try this command if you are using the nightly release and having some compatibility issues. + The parameter `-v` in the command `docker-compose down -v` will **delete** all your local NebulaGraph storage data. Try this command if you are using the nightly release and having some compatibility issues. ## Modify configurations -The configuration file of Nebula Graph deployed by Docker Compose is `nebula-docker-compose/docker-compose.yaml`. To make the new configuration take effect, modify the configuration in this file and restart the service. +The configuration file of NebulaGraph deployed by Docker Compose is `nebula-docker-compose/docker-compose.yaml`. To make the new configuration take effect, modify the configuration in this file and restart the service. For more instructions, see [Configurations](../../5.configurations-and-logs/1.configurations/1.configurations.md). @@ -225,15 +225,15 @@ graphd: `9669:9669` indicates the internal port 9669 is uniformly mapped to external ports, while `19669` indicates the internal port 19669 is randomly mapped to external ports. 
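As a hedged illustration of the port-mapping syntax just described, a `graphd` service entry in `docker-compose.yaml` might expose its ports like this (the values are examples, not taken verbatim from the repository file):

```yaml
graphd:
  ports:
    - 9669:9669   # fixed mapping: host port 9669 -> container port 9669
    - 19669       # container port 19669 -> a randomly assigned host port
```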
-### How to upgrade or update the docker images of Nebula Graph services +### How to upgrade or update the docker images of NebulaGraph services 1. In the `nebula-docker-compose/docker-compose.yaml` file, change all the `image` values to the required image version. 2. In the `nebula-docker-compose` directory, run `docker-compose pull` to update the images of the Graph Service, Storage Service, Meta Service, and Nebula Console. -3. Run `docker-compose up -d` to start the Nebula Graph services again. +3. Run `docker-compose up -d` to start the NebulaGraph services again. -4. After connecting to Nebula Graph with Nebula Console, run `SHOW HOSTS GRAPH`, `SHOW HOSTS STORAGE`, or `SHOW HOSTS META` to check the version of the responding service respectively. +4. After connecting to NebulaGraph with Nebula Console, run `SHOW HOSTS GRAPH`, `SHOW HOSTS STORAGE`, or `SHOW HOSTS META` to check the version of the responding service respectively. ### `ERROR: toomanyrequests` when `docker-compose pull` @@ -245,10 +245,10 @@ You have met the rate limit of Docker Hub. Learn more on [Understanding Docker H ### How to update the Nebula Console client -The command `docker-compose pull` updates both the Nebula Graph services and the Nebula Console. +The command `docker-compose pull` updates both the NebulaGraph services and the Nebula Console. 
## Related documents -- [Install and deploy Nebula Graph with the source code](1.install-nebula-graph-by-compiling-the-source-code.md) -- [Install Nebula Graph by RPM or DEB](2.install-nebula-graph-by-rpm-or-deb.md) -- [Connect to Nebula Graph](../connect-to-nebula-graph.md) +- [Install and deploy NebulaGraph with the source code](1.install-nebula-graph-by-compiling-the-source-code.md) +- [Install NebulaGraph by RPM or DEB](2.install-nebula-graph-by-rpm-or-deb.md) +- [Connect to NebulaGraph](../connect-to-nebula-graph.md) diff --git a/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/4.install-nebula-graph-from-tar.md b/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/4.install-nebula-graph-from-tar.md index c21eb8bde23..e5c4eea8c7c 100644 --- a/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/4.install-nebula-graph-from-tar.md +++ b/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/4.install-nebula-graph-from-tar.md @@ -1,14 +1,14 @@ -# Install Nebula graph with the tar.gz file +# Install NebulaGraph with the tar.gz file -You can install Nebula Graph by downloading the tar.gz file. +You can install NebulaGraph by downloading the tar.gz file. !!! note - Nebula Graph provides installing with the tar.gz file starting from version 2.6.0. + NebulaGraph provides installing with the tar.gz file starting from version 2.6.0. ## Installation steps -1. Download the Nebula Graph tar.gz file using the following address. +1. Download the NebulaGraph tar.gz file using the following address. Before downloading, you need to replace `` with the version you want to download. @@ -39,13 +39,13 @@ You can install Nebula Graph by downloading the tar.gz file. 
https://oss-cdn.nebula-graph.com.cn/package//nebula-graph-.ubuntu2004.amd64.tar.gz.sha256sum.txt ``` - For example, to download the Nebula Graph {{nebula.branch}} tar.gz file for `CentOS 7.5`, run the following command: + For example, to download the NebulaGraph {{nebula.branch}} tar.gz file for `CentOS 7.5`, run the following command: ```bash wget https://oss-cdn.nebula-graph.com.cn/package/{{nebula.release}}/nebula-graph-{{nebula.release}}.el7.x86_64.tar.gz ``` -2. Decompress the tar.gz file to the Nebula Graph installation directory. +2. Decompress the tar.gz file to the NebulaGraph installation directory. ```bash tar -xvzf -C @@ -62,12 +62,12 @@ You can install Nebula Graph by downloading the tar.gz file. 3. Modify the name of the configuration file. - Enter the decompressed directory, rename the files `nebula-graphd.conf.default`, `nebula-metad.conf.default`, and `nebula-storaged.conf.default` in the subdirectory `etc`, and delete `.default` to apply the default configuration of Nebula Graph. To modify the configuration, see [Configurations](../../5.configurations-and-logs/1.configurations/1.configurations.md). + Enter the decompressed directory, rename the files `nebula-graphd.conf.default`, `nebula-metad.conf.default`, and `nebula-storaged.conf.default` in the subdirectory `etc`, and delete `.default` to apply the default configuration of NebulaGraph. To modify the configuration, see [Configurations](../../5.configurations-and-logs/1.configurations/1.configurations.md). -So far, you have installed Nebula Graph successfully. +So far, you have installed NebulaGraph successfully. 
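Step 3 above (dropping the `.default` suffix from the sample configuration files in `etc`) can be sketched as below. The install directory here is a throwaway temp directory and the empty files are placeholders; in practice you would run this in the directory you actually unpacked the tar.gz into.

```shell
# Sketch of step 3: strip the `.default` suffix so the default
# configuration takes effect. $NEBULA_HOME is a hypothetical stand-in
# for the real installation path.
set -eu
NEBULA_HOME=$(mktemp -d)
mkdir -p "$NEBULA_HOME/etc"
touch "$NEBULA_HOME/etc/nebula-graphd.conf.default" \
      "$NEBULA_HOME/etc/nebula-metad.conf.default" \
      "$NEBULA_HOME/etc/nebula-storaged.conf.default"
for f in "$NEBULA_HOME"/etc/*.conf.default; do
    cp "$f" "${f%.default}"   # copy rather than rename, keeping a backup
done
ls "$NEBULA_HOME/etc"
```

Copying instead of renaming keeps the pristine `.default` files around as a reference when you later edit the live configuration.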
## Next to do - (Enterprise Edition)[Deploy license](../deploy-license.md) -- [Manage Nebula Graph services](../manage-service.md) +- [Manage NebulaGraph services](../manage-service.md) diff --git a/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/6.deploy-nebula-graph-with-peripherals.md b/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/6.deploy-nebula-graph-with-peripherals.md index 0b2021828d9..a35a05397cc 100644 --- a/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/6.deploy-nebula-graph-with-peripherals.md +++ b/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/6.deploy-nebula-graph-with-peripherals.md @@ -1,16 +1,16 @@ -# Install Nebula Graph with ecosystem tools +# Install NebulaGraph with ecosystem tools -You can install the Enterprise Edition and Community Edition of Nebula Graph with the following ecosystem tools: +You can install the Enterprise Edition and Community Edition of NebulaGraph with the following ecosystem tools: - Nebula Dashboard Enterprise Edition - Nebula Operator ## Installation details -- To install Nebula Graph with **Nebula Dashboard Enterprise Edition**, see [Create a cluster](../../nebula-dashboard-ent/3.create-import-dashboard/1.create-cluster.md). -- To install Nebula Graph with **Nebula Operator**, see [Deploy Nebula Graph clusters with Kubectl](../../nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Deploy Nebula Graph clusters with Helm](../../nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). +- To install NebulaGraph with **Nebula Dashboard Enterprise Edition**, see [Create a cluster](../../nebula-dashboard-ent/3.create-import-dashboard/1.create-cluster.md). 
+- To install NebulaGraph with **Nebula Operator**, see [Deploy NebulaGraph clusters with Kubectl](../../nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Deploy NebulaGraph clusters with Helm](../../nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). !!! note - Contact our sales ([inqury@vesoft.com](mailto:inqury@vesoft.com)) to get the installation package for the Enterprise Edition of Nebula Graph. + Contact our sales ([inqury@vesoft.com](mailto:inqury@vesoft.com)) to get the installation package for the Enterprise Edition of NebulaGraph. diff --git a/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/deploy-nebula-graph-cluster.md b/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/deploy-nebula-graph-cluster.md index f029193b60b..4ed3ba7b222 100644 --- a/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/deploy-nebula-graph-cluster.md +++ b/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/deploy-nebula-graph-cluster.md @@ -1,6 +1,6 @@ -# Deploy a Nebula Graph cluster with RPM/DEB package on multiple servers +# Deploy a NebulaGraph cluster with RPM/DEB package on multiple servers -For now, Nebula Graph does not provide an official deployment tool. Users can deploy a Nebula Graph cluster with RPM or DEB package manually. This topic provides an example of deploying a Nebula Graph cluster on multiple servers (machines). +For now, NebulaGraph does not provide an official deployment tool. Users can deploy a NebulaGraph cluster with RPM or DEB package manually. This topic provides an example of deploying a NebulaGraph cluster on multiple servers (machines). ## Deployment @@ -19,19 +19,19 @@ For now, Nebula Graph does not provide an official deployment tool. 
Users can de ## Manual deployment process -### Step 1: Install Nebula Graph +### Step 1: Install NebulaGraph -Install Nebula Graph on each machine in the cluster. Available approaches of installation are as follows. +Install NebulaGraph on each machine in the cluster. Available approaches of installation are as follows. -* [Install Nebula Graph with RPM or DEB package](2.install-nebula-graph-by-rpm-or-deb.md) +* [Install NebulaGraph with RPM or DEB package](2.install-nebula-graph-by-rpm-or-deb.md) -* [Install Nebula Graph by compiling the source code](1.install-nebula-graph-by-compiling-the-source-code.md) +* [Install NebulaGraph by compiling the source code](1.install-nebula-graph-by-compiling-the-source-code.md) ### Step 2: Modify the configurations -To deploy Nebula Graph according to your requirements, you have to modify the configuration files. +To deploy NebulaGraph according to your requirements, you have to modify the configuration files. -All the configuration files for Nebula Graph, including `nebula-graphd.conf`, `nebula-metad.conf`, and `nebula-storaged.conf`, are stored in the `etc` directory in the installation path. You only need to modify the configuration for the corresponding service on the machines. The configurations that need to be modified for each machine are as follows. +All the configuration files for NebulaGraph, including `nebula-graphd.conf`, `nebula-metad.conf`, and `nebula-storaged.conf`, are stored in the `etc` directory in the installation path. You only need to modify the configuration for the corresponding service on the machines. The configurations that need to be modified for each machine are as follows. 
| Machine name | The configuration to be modified | | :----- | :--------------- | @@ -41,7 +41,7 @@ All the configuration files for Nebula Graph, including `nebula-graphd.conf`, `n | D | `nebula-graphd.conf`, `nebula-storaged.conf` | | E | `nebula-graphd.conf`, `nebula-storaged.conf` | -Users can refer to the content of the following configurations, which only show part of the cluster settings. The hidden content uses the default setting so that users can better understand the relationship between the servers in the Nebula Graph cluster. +Users can refer to the content of the following configurations, which only show part of the cluster settings. The hidden content uses the default setting so that users can better understand the relationship between the servers in the NebulaGraph cluster. !!! note @@ -267,7 +267,7 @@ Start the corresponding service on **each machine**. Descriptions are as follows | D | graphd, storaged | | E | graphd, storaged | -The command to start the Nebula Graph services is as follows. +The command to start the NebulaGraph services is as follows. ```bash sudo /usr/local/nebula/scripts/nebula.service start @@ -275,11 +275,11 @@ sudo /usr/local/nebula/scripts/nebula.service start !!! note - - Make sure all the processes of services on each machine are started. Otherwise, you will fail to start Nebula Graph. + - Make sure all the processes of services on each machine are started. Otherwise, you will fail to start NebulaGraph. - When the graphd process, the storaged process, and the metad process are all started, you can use `all` instead. - - `/usr/local/nebula` is the default installation path for Nebula Graph. Use the actual path if you have customized the path. For more information about how to start and stop the services, see [Manage Nebula Graph services](../manage-service.md). + - `/usr/local/nebula` is the default installation path for NebulaGraph. Use the actual path if you have customized the path. 
For more information about how to start and stop the services, see [Manage NebulaGraph services](../manage-service.md). ### Step 4: Check the cluster status @@ -289,7 +289,7 @@ Install the native CLI client [Nebula Console](../../2.quick-start/3.connect-to- $ ./nebula-console --addr 192.168.10.111 --port 9669 -u root -p nebula 2021/05/25 01:41:19 [INFO] connection pool is initialized successfully -Welcome to Nebula Graph! +Welcome to NebulaGraph! > ADD HOSTS 192.168.10.111:9779, 192.168.10.112:9779, 192.168.10.113:9779, 192.168.10.114:9779, 192.168.10.115:9779; > SHOW HOSTS; diff --git a/docs-2.0/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-from-200-to-latest.md b/docs-2.0/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-from-200-to-latest.md index da8b66fca9f..bc46c7ff096 100644 --- a/docs-2.0/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-from-200-to-latest.md +++ b/docs-2.0/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-from-200-to-latest.md @@ -1,16 +1,16 @@ -# Upgrade Nebula Graph v2.0.x to v{{nebula.release}} +# Upgrade NebulaGraph v2.0.x to v{{nebula.release}} -To upgrade Nebula Graph v2.0.x to v{{nebula.release}}, you only need to use the RPM/DEB package of v{{nebula.release}} for the upgrade, or [compile it](../2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md) and then reinstall. +To upgrade NebulaGraph v2.0.x to v{{nebula.release}}, you only need to use the RPM/DEB package of v{{nebula.release}} for the upgrade, or [compile it](../2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md) and then reinstall. !!! note - Nebula Graph v2.0.x refers to v2.0.0-GA and v2.0.1 releases. If your Nebula Graph version is too low (v2.0.0-RC, v2.0.0-beta, v1.x), see [Upgrade Nebula Graph to v{{nebula.release}}](upgrade-nebula-graph-to-latest.md). + NebulaGraph v2.0.x refers to v2.0.0-GA and v2.0.1 releases. 
If your NebulaGraph version is too low (v2.0.0-RC, v2.0.0-beta, v1.x), see [Upgrade NebulaGraph to v{{nebula.release}}](upgrade-nebula-graph-to-latest.md). ## Upgrade steps with RPM/DEB packages 1. Download the [RPM/DEB package](https://github.com/vesoft-inc/nebula-graph/releases/tag/v{{nebula.release}}). -2. Stop all Nebula Graph services. For details, see [Manage Nebula Graph Service](../../2.quick-start/5.start-stop-service.md). It is recommended to back up the configuration file before updating. +2. Stop all NebulaGraph services. For details, see [Manage NebulaGraph Service](../../2.quick-start/5.start-stop-service.md). It is recommended to back up the configuration file before updating. 3. Execute the following command to upgrade: @@ -32,13 +32,13 @@ To upgrade Nebula Graph v2.0.x to v{{nebula.release}}, you only need to use the $ sudo dpkg -i ``` -4. Start the required services on each server. For details, see [Manage Nebula Graph Service](../../2.quick-start/5.start-stop-service.md). +4. Start the required services on each server. For details, see [Manage NebulaGraph Service](../../2.quick-start/5.start-stop-service.md). ## Upgrade steps by compiling the new source code -1. Back up the old version of the configuration file. The configuration file is saved in the `etc` directory of the Nebula Graph installation path. +1. Back up the old version of the configuration file. The configuration file is saved in the `etc` directory of the NebulaGraph installation path. -2. Update the repository and compile the source code. For details, see [Install Nebula Graph by compiling the source code](../2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md). +2. Update the repository and compile the source code. For details, see [Install NebulaGraph by compiling the source code](../2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md). !!! 
note @@ -50,6 +50,6 @@ To upgrade Nebula Graph v2.0.x to v{{nebula.release}}, you only need to use the 2. Execute the command `docker-compose pull` in the directory `nebula-docker-compose` to update the images of all services. -3. Execute the command `docker-compose down` to stop the Nebula Graph service. +3. Execute the command `docker-compose down` to stop the NebulaGraph service. -4. Execute the command `docker-compose up -d` to start the Nebula Graph service. +4. Execute the command `docker-compose up -d` to start the NebulaGraph service. diff --git a/docs-2.0/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-graph-to-latest.md b/docs-2.0/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-graph-to-latest.md index 1b73de26141..17a805545e4 100644 --- a/docs-2.0/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-graph-to-latest.md +++ b/docs-2.0/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-graph-to-latest.md @@ -1,29 +1,29 @@ -# Upgrade Nebula Graph from version 2.x to {{nebula.release}} +# Upgrade NebulaGraph from version 2.x to {{nebula.release}} -This topic describes how to upgrade Nebula Graph from version 2.x to {{nebula.release}}, taking upgrading from version 2.6.1 to {{nebula.release}} as an example. +This topic describes how to upgrade NebulaGraph from version 2.x to {{nebula.release}}, taking upgrading from version 2.6.1 to {{nebula.release}} as an example. ## Applicable source versions -This topic applies to upgrading Nebula Graph from 2.0.0 and later 2.x versions to {{nebula.release}}. It does not apply to historical versions earlier than 2.0.0, including the 1.x versions. +This topic applies to upgrading NebulaGraph from 2.0.0 and later 2.x versions to {{nebula.release}}. It does not apply to historical versions earlier than 2.0.0, including the 1.x versions. 
-To upgrade Nebula Graph from historical versions to {{nebula.release}}: +To upgrade NebulaGraph from historical versions to {{nebula.release}}: 1. Upgrade it to the latest 2.x version according to the docs of that version. 2. Follow this topic to upgrade it to {{nebula.release}}. !!! caution - To upgrade Nebula Graph from versions earlier than 2.0.0 (including the 1.x versions) to {{nebula.release}}, you need to find the `date_time_zonespec.csv` in the `share/resources` directory of {{nebula.release}} files, and then copy it to the same directory in the Nebula Graph installation path. + To upgrade NebulaGraph from versions earlier than 2.0.0 (including the 1.x versions) to {{nebula.release}}, you need to find the `date_time_zonespec.csv` in the `share/resources` directory of {{nebula.release}} files, and then copy it to the same directory in the NebulaGraph installation path. ## Limitations -* Rolling Upgrade is not supported. You must stop all the Nebula Graph services before the upgrade. +* Rolling Upgrade is not supported. You must stop all the NebulaGraph services before the upgrade. * There is no upgrade script. You have to manually upgrade each server in the cluster. -* This topic does not apply to scenarios where Nebula Graph is deployed with Docker, including Docker Swarm, Docker Compose, and K8s. +* This topic does not apply to scenarios where NebulaGraph is deployed with Docker, including Docker Swarm, Docker Compose, and K8s. -* You must upgrade the old Nebula Graph services on the same machines they are deployed. **DO NOT** change the IP addresses, configuration files of the machines, and **DO NOT** change the cluster topology. +* You must upgrade the old NebulaGraph services on the same machines they are deployed. **DO NOT** change the IP addresses, configuration files of the machines, and **DO NOT** change the cluster topology. 
* The hard disk space left on each machine should be **two times** as much as the space taken by the original data directories. Half of the reserved space is for storing the manual backup of data. The other half is for storing the WALs that will be copied to the `dst_db_path` and the new keys supporting vertices with no tags. @@ -37,7 +37,7 @@ To upgrade Nebula Graph from historical versions to {{nebula.release}}: - Data swelling - The Nebula Graph 3.x version expands the original data format with one more key per vertex, so the data takes up more space after the upgrade. + The NebulaGraph 3.x version expands the original data format with one more key per vertex, so the data takes up more space after the upgrade. The format of the new key is: @@ -47,7 +47,7 @@ To upgrade Nebula Graph from historical versions to {{nebula.release}}: - Client compatibility - After the upgrade, you will not be able to connect to Nebula Graph from old clients. You will need to upgrade all clients to a version compatible with Nebula Graph {{nebula.release}}. + After the upgrade, you will not be able to connect to NebulaGraph from old clients. You will need to upgrade all clients to a version compatible with NebulaGraph {{nebula.release}}. - Configuration changes @@ -69,7 +69,7 @@ To upgrade Nebula Graph from historical versions to {{nebula.release}}: ## Preparations before the upgrade -- Download the TAR file of Nebula Graph {{nebula.release}} according to your operating system and system architecture. You need the binary files during the upgrade. Find the TAR file on [the download page](https://nebula-graph.io/download/). +- Download the TAR file of NebulaGraph {{nebula.release}} according to your operating system and system architecture. You need the binary files during the upgrade. Find the TAR file on [the download page](https://nebula-graph.io/download/). !!! note You can also get the new binaries from the source code or the RPM/DEB package. 
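The "two times the space taken by the original data directories" requirement stated in the limitations above could be pre-checked with a script along these lines. The data path is a temp directory seeded with a small sample file purely for illustration; point it at your real `data_path` in practice.

```shell
# Illustrative pre-flight check for the 2x free-disk-space requirement.
# $data_path is a hypothetical stand-in for the real data directory.
set -eu
data_path=$(mktemp -d)
dd if=/dev/zero of="$data_path/sample" bs=1024 count=64 2>/dev/null
used_kb=$(du -sk "$data_path" | awk '{print $1}')
free_kb=$(df -Pk "$data_path" | awk 'NR==2 {print $4}')
if [ "$free_kb" -ge $((used_kb * 2)) ]; then
    echo "enough free space for the upgrade"
else
    echo "need at least $((used_kb * 2)) KB free, have $free_kb KB"
fi
```

With multiple data directories configured, you would sum `du -sk` over each of them before comparing against the free space on the target filesystem.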
@@ -88,13 +88,13 @@ To upgrade Nebula Graph from historical versions to {{nebula.release}}: ## Upgrade steps -1. Stop all Nebula Graph services. +1. Stop all NebulaGraph services. ``` /scripts/nebula.service stop all ``` - `nebula_install_path` indicates the installation path of Nebula Graph. + `nebula_install_path` indicates the installation path of NebulaGraph. The storaged progress needs around 1 minute to flush data. You can run `nebula.service status all` to check if all services are stopped. For more information about starting and stopping services, see [Manage services](../manage-service.md). @@ -102,10 +102,10 @@ To upgrade Nebula Graph from historical versions to {{nebula.release}}: If the services are not fully stopped in 20 minutes, stop upgrading and ask for help on [the forum](https://discuss.nebula-graph.io/) or [Github](https://github.com/vesoft-inc/nebula/issues). -2. In the target path where you unpacked the TAR file, use the binaries in the `bin` directory to replace the old binaries in the `bin` directory in the Nebula Graph installation path. +2. In the target path where you unpacked the TAR file, use the binaries in the `bin` directory to replace the old binaries in the `bin` directory in the NebulaGraph installation path. !!! note - Update the binary of the corresponding service on each Nebula Graph server. + Update the binary of the corresponding service on each NebulaGraph server. 3. Modify the following parameters in all Graph configuration files to accommodate the value range of the new version. If the parameter values are within the specified range, skip this step. @@ -165,7 +165,7 @@ To upgrade Nebula Graph from historical versions to {{nebula.release}}: !!! note If the operation fails, stop the upgrade and ask for help on [the forum](https://discuss.nebula-graph.com.cn/) or [GitHub](https://github.com/vesoft-inc/nebula/issues). -7. Connect to the new version of Nebula Graph to verify that services are available and data are complete. 
For how to connect, see [Connect to Nebula Graph](../connect-to-nebula-graph.md). +7. Connect to the new version of NebulaGraph to verify that services are available and data are complete. For how to connect, see [Connect to NebulaGraph](../connect-to-nebula-graph.md). Currently, there is no official way to check whether the upgrade is successful. You can run the following reference statements to test the upgrade: @@ -184,15 +184,15 @@ To upgrade Nebula Graph from historical versions to {{nebula.release}}: ## Upgrade failure and rollback -If the upgrade fails, stop all Nebula Graph services of the new version, recover the old configuration files and binaries, and start the services of the old version. +If the upgrade fails, stop all NebulaGraph services of the new version, recover the old configuration files and binaries, and start the services of the old version. -All Nebula Graph clients in use must be switched to the old version. +All NebulaGraph clients in use must be switched to the old version. ## FAQ ### Can I write through the client during the upgrade? -A: No. You must stop all Nebula Graph services during the upgrade. +A: No. You must stop all NebulaGraph services during the upgrade. ### How to upgrade if a machine has only the Graph Service, but not the Storage Service? @@ -228,4 +228,4 @@ If the issue persists, ask for help on [the forum](https://discuss.nebula-graph. ### Why the job type changed after the upgrade, but job ID remains the same? -A: `SHOW JOBS` depends on an internal ID to identify job types, but in Nebula Graph 2.5.0 the internal ID changed in [this pull request](https://github.com/vesoft-inc/nebula-common/pull/562/files), so this issue happens after upgrading from a version earlier than 2.5.0. 
+A: `SHOW JOBS` depends on an internal ID to identify job types, but in NebulaGraph 2.5.0 the internal ID changed in [this pull request](https://github.com/vesoft-inc/nebula-common/pull/562/files), so this issue happens after upgrading from a version earlier than 2.5.0. diff --git a/docs-2.0/4.deployment-and-installation/4.uninstall-nebula-graph.md b/docs-2.0/4.deployment-and-installation/4.uninstall-nebula-graph.md index a3c03bd6df6..1a75f0d158d 100644 --- a/docs-2.0/4.deployment-and-installation/4.uninstall-nebula-graph.md +++ b/docs-2.0/4.deployment-and-installation/4.uninstall-nebula-graph.md @@ -1,22 +1,22 @@ -# Uninstall Nebula Graph +# Uninstall NebulaGraph -This topic describes how to uninstall Nebula Graph. +This topic describes how to uninstall NebulaGraph. !!! caution - Before re-installing Nebula Graph on a machine, follow this topic to completely uninstall the old Nebula Graph, in case the remaining data interferes with the new services, including inconsistencies between Meta services. + Before re-installing NebulaGraph on a machine, follow this topic to completely uninstall the old NebulaGraph, in case the remaining data interferes with the new services, including inconsistencies between Meta services. ## Prerequisite -The Nebula Graph services should be stopped before the uninstallation. For more information, see [Manage Nebula Graph services](../2.quick-start/5.start-stop-service.md). +The NebulaGraph services should be stopped before the uninstallation. For more information, see [Manage NebulaGraph services](../2.quick-start/5.start-stop-service.md). ## Step 1: Delete data files of the Storage and Meta Services -If you have modified the `data_path` in the configuration files for the Meta Service and Storage Service, the directories where Nebula Graph stores data may not be in the installation path of Nebula Graph. Check the configuration files to confirm the data paths, and then manually delete the directories to clear all data. 
+If you have modified the `data_path` in the configuration files for the Meta Service and Storage Service, the directories where NebulaGraph stores data may not be in the installation path of NebulaGraph. Check the configuration files to confirm the data paths, and then manually delete the directories to clear all data. !!! Note - For a Nebula Graph cluster, delete the data files of all Storage and Meta servers. + For a NebulaGraph cluster, delete the data files of all Storage and Meta servers. 1. Check the [Storage Service disk settings](../5.configurations-and-logs/1.configurations/4.storage-config.md#disk_configurations). For example: @@ -37,15 +37,15 @@ If you have modified the `data_path` in the configuration files for the Meta Ser Delete all installation directories, including the `cluster.id` file in them. -The default installation path is `/usr/local/nebula`, which is specified by `--prefix` while installing Nebula Graph. +The default installation path is `/usr/local/nebula`, which is specified by `--prefix` while installing NebulaGraph. -### Uninstall Nebula Graph deployed with source code +### Uninstall NebulaGraph deployed with source code -Find the installation directories of Nebula Graph, and delete them all. +Find the installation directories of NebulaGraph, and delete them all. -### Uninstall Nebula Graph deployed with RPM packages +### Uninstall NebulaGraph deployed with RPM packages -1. Run the following command to get the Nebula Graph version. +1. Run the following command to get the NebulaGraph version. ```bash $ rpm -qa | grep "nebula" @@ -57,7 +57,7 @@ Find the installation directories of Nebula Graph, and delete them all. nebula-graph-{{ nebula.release }}-1.x86_64 ``` -2. Run the following command to uninstall Nebula Graph. +2. Run the following command to uninstall NebulaGraph. ```bash sudo rpm -e @@ -71,9 +71,9 @@ Find the installation directories of Nebula Graph, and delete them all. 3. Delete the installation directories. 
-### Uninstall Nebula Graph deployed with DEB packages +### Uninstall NebulaGraph deployed with DEB packages -1. Run the following command to get the Nebula Graph version. +1. Run the following command to get the NebulaGraph version. ```bash $ dpkg -l | grep "nebula" @@ -85,7 +85,7 @@ Find the installation directories of Nebula Graph, and delete them all. ii nebula-graph {{ nebula.release }} amd64 Nebula Package built using CMake ``` -2. Run the following command to uninstall Nebula Graph. +2. Run the following command to uninstall NebulaGraph. ```bash sudo dpkg -r @@ -99,9 +99,9 @@ Find the installation directories of Nebula Graph, and delete them all. 3. Delete the installation directories. -### Uninstall Nebula Graph deployed with Docker Compose +### Uninstall NebulaGraph deployed with Docker Compose -1. In the `nebula-docker-compose` directory, run the following command to stop the Nebula Graph services. +1. In the `nebula-docker-compose` directory, run the following command to stop the NebulaGraph services. ```bash docker-compose down -v diff --git a/docs-2.0/4.deployment-and-installation/5.zone.md b/docs-2.0/4.deployment-and-installation/5.zone.md index d59efbada26..b6f8f70cdd2 100644 --- a/docs-2.0/4.deployment-and-installation/5.zone.md +++ b/docs-2.0/4.deployment-and-installation/5.zone.md @@ -1,12 +1,12 @@ # Manage zone -Nebula Graph supports the zone feature to manage Storage services in a cluster to achieve resource isolation, which is known as logical rack. +NebulaGraph supports the zone feature to manage Storage services in a cluster to achieve resource isolation, which is known as logical rack. ## Background !!! compatibility - From Nebula Graph version 3.0.0, the Storage services added in the configuration files **CANNOT** be read or written directly. The configuration files only register the Storage services into the Meta services. You must run the `ADD HOSTS` command to read and write data on Storage servers. 
+ From NebulaGraph version 3.0.0, the Storage services added in the configuration files **CANNOT** be read or written directly. The configuration files only register the Storage services into the Meta services. You must run the `ADD HOSTS` command to read and write data on Storage servers. Users can add the Storage services to a zone. Users specify a zone when creating a graph space, the graph space will be created in all the Storage services of the zone. Partitions and replicas are evenly stored in each zone. As shown in the figure below. diff --git a/docs-2.0/4.deployment-and-installation/6.deploy-text-based-index/1.text-based-index-restrictions.md b/docs-2.0/4.deployment-and-installation/6.deploy-text-based-index/1.text-based-index-restrictions.md index 7772cf51a54..afed32dd83d 100644 --- a/docs-2.0/4.deployment-and-installation/6.deploy-text-based-index/1.text-based-index-restrictions.md +++ b/docs-2.0/4.deployment-and-installation/6.deploy-text-based-index/1.text-based-index-restrictions.md @@ -36,4 +36,4 @@ For now, full-text search has the following limitations: 15. It may take a while for Elasticsearch to create indexes. If Nebula Graph warns no index is found, wait for the index to take effect (however, the waiting time is unknown and there is no code to check). -16. Nebula Graph clusters deployed with K8s do not support the full-text search feature. +16. NebulaGraph clusters deployed with K8s do not support the full-text search feature. diff --git a/docs-2.0/4.deployment-and-installation/6.deploy-text-based-index/3.deploy-listener.md b/docs-2.0/4.deployment-and-installation/6.deploy-text-based-index/3.deploy-listener.md index 0784846e6ae..f454997b5f5 100644 --- a/docs-2.0/4.deployment-and-installation/6.deploy-text-based-index/3.deploy-listener.md +++ b/docs-2.0/4.deployment-and-installation/6.deploy-text-based-index/3.deploy-listener.md @@ -6,7 +6,7 @@ Full-text index data is written to the Elasticsearch cluster asynchronously. 
The * You have read and fully understood the [restrictions](../../4.deployment-and-installation/6.deploy-text-based-index/1.text-based-index-restrictions.md) for using full-text indexes. -* You have [deployed a Nebula Graph cluster](../2.compile-and-install-nebula-graph/deploy-nebula-graph-cluster.md). +* You have [deployed a NebulaGraph cluster](../2.compile-and-install-nebula-graph/deploy-nebula-graph-cluster.md). * You have [deploy a Elasticsearch cluster](./2.deploy-es.md). @@ -22,7 +22,7 @@ Full-text index data is written to the Elasticsearch cluster asynchronously. The ### Step 1: Install the Storage service -The Listener process and the storaged process use the same binary file. However, their configuration files and using ports are different. You can install Nebula Graph on all servers that need to deploy a Listener, but only the Storage service can be used. For details, see [Install Nebula Graph by RPM or DEB Package](../2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md). +The Listener process and the storaged process use the same binary file. However, their configuration files and using ports are different. You can install NebulaGraph on all servers that need to deploy a Listener, but only the Storage service can be used. For details, see [Install NebulaGraph by RPM or DEB Package](../2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md). ### Step 2: Prepare the configuration file for the Listener @@ -60,9 +60,9 @@ Run the following command to start the Listener. `${listener_config_path}` is the path where you store the Listener configuration file. -### Step 4: Add Listeners to Nebula Graph +### Step 4: Add Listeners to NebulaGraph -[Connect to Nebula Graph](../../2.quick-start/3.connect-to-nebula-graph.md) and run [`USE `](../../3.ngql-guide/9.space-statements/2.use-space.md) to enter the graph space that you want to create full-text indexes for. 
Then run the following statement to add a Listener into Nebula Graph. +[Connect to NebulaGraph](../../2.quick-start/3.connect-to-nebula-graph.md) and run [`USE <graph_space_name>`](../../3.ngql-guide/9.space-statements/2.use-space.md) to enter the graph space that you want to create full-text indexes for. Then run the following statement to add a Listener into NebulaGraph. ```ngql ADD LISTENER ELASTICSEARCH <listener_ip:port> [,<listener_ip:port>, ...] diff --git a/docs-2.0/4.deployment-and-installation/connect-to-nebula-graph.md b/docs-2.0/4.deployment-and-installation/connect-to-nebula-graph.md index cae94615bbc..44e9e3ac3dd 100644 --- a/docs-2.0/4.deployment-and-installation/connect-to-nebula-graph.md +++ b/docs-2.0/4.deployment-and-installation/connect-to-nebula-graph.md @@ -1,4 +1,4 @@ -# Connect to Nebula Graph +# Connect to NebulaGraph {% include "/source_connect-to-nebula-graph.md" %} diff --git a/docs-2.0/4.deployment-and-installation/deploy-license.md b/docs-2.0/4.deployment-and-installation/deploy-license.md index d653d95e926..9560c295577 100644 --- a/docs-2.0/4.deployment-and-installation/deploy-license.md +++ b/docs-2.0/4.deployment-and-installation/deploy-license.md @@ -1,6 +1,6 @@ -# Deploy a license for Nebula Graph Enterprise Edition +# Deploy a license for NebulaGraph Enterprise Edition -Nebula Graph Enterprise Edition requires the user to deploy a license file before starting the Enterprise Edition. This topic describes how to deploy a license file for the Enterprise Edition. +NebulaGraph Enterprise Edition requires the user to deploy a license file before starting the Enterprise Edition. This topic describes how to deploy a license file for the Enterprise Edition. !!! enterpriseonly @@ -8,7 +8,7 @@ Nebula Graph Enterprise Edition requires the user to deploy a license file befor ## Precautions -- If the license file is not deployed, Nebula Graph Enterprise Edition cannot be started. +- If the license file is not deployed, NebulaGraph Enterprise Edition cannot be started.
- Do not modify the license file, otherwise the license will become invalid. @@ -60,19 +60,19 @@ The license file contains information such as `issuedDate` and `expirationDate`. |`organization`|The username.| |`issuedDate`|The date that the license is issued. | |`expirationDate`|The date that the license expires.| -|`product`|The product type. The product type of Nebula Graph is `nebula_graph`.| +|`product`|The product type. The product type of NebulaGraph is `nebula_graph`.| |`version`|The version information.| |`licenseType`|The license type, including `enterprise`, `samll_bussiness`, `pro`, and `individual`. | |`gracePeriod`| The buffer time (in days) for the service to continue to be used after the license expires, and the service will be stopped after the buffer period. The trial version of license has no buffer period after expiration and the default value of this parameter is 0. | -|`graphdSpec`| The max number of graph services in a cluster. Nebula Graph detects the number of active graph services in real-time. You are unable to connect to the cluster once the max number is reached. | -|`storagedSpec`| The max number of storage services in a cluster. Nebula Graph detects the number of active storage services in real-time. You are unable to connect to the cluster once the max number is reached. | +|`graphdSpec`| The max number of graph services in a cluster. NebulaGraph detects the number of active graph services in real-time. You are unable to connect to the cluster once the max number is reached. | +|`storagedSpec`| The max number of storage services in a cluster. NebulaGraph detects the number of active storage services in real-time. You are unable to connect to the cluster once the max number is reached. | |`clusterCode`| The user's hardware information, which is also the unique identifier of the cluster. This parameter is not available in the trial version of the license. | ## Deploy the license -1. 
Send email to `inquiry@vesoft.com` to apply for the Nebula Graph Enterprise Edition package. +1. Send email to `inquiry@vesoft.com` to apply for the NebulaGraph Enterprise Edition package. -2. Install Nebula Graph Enterprise Edition. The installation method is the same as the Community Edition. See [Install Nebula Graph with RPM or DEB package](2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md). +2. Install NebulaGraph Enterprise Edition. The installation method is the same as the Community Edition. See [Install NebulaGraph with RPM or DEB package](2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md). 3. Send email to `inquiry@vesoft.com` to apply for the license file `nebula.license`. @@ -82,11 +82,11 @@ The license file contains information such as `issuedDate` and `expirationDate`. For the upload address of the license file for ecosystem tools, refer to the document of [Ecosystem tools overview](../20.appendix/6.eco-tool-version.md). -## Renew a Nebula Graph Enterprise Edition license +## Renew a NebulaGraph Enterprise Edition license 1. Email us at `inquiry@vesoft.com` to apply for a new license file `nebula.license`. 2. In `share/resources/` under the installation directory of each Meta service, replace the old license file with the new one. -3. Restart Storage and Graph services. For information about how to restart services, see [Start Nebula Graph](manage-service.md). If your license expires within the buffer period (14 days by default), you do not have to restart Storage and Graph services. +3. Restart Storage and Graph services. For information about how to restart services, see [Start NebulaGraph](manage-service.md). If your license expires within the buffer period (14 days by default), you do not have to restart Storage and Graph services. !!! note @@ -100,4 +100,4 @@ The license file contains information such as `issuedDate` and `expirationDate`.
- View the License file with HTTP port - When the Nebula Graph cluster is running normally, you can view the license file with the HTTP port (default port is 19559) of the meta service. For example: `curl -G "http://192.168.10.101:19559/license"`. + When the NebulaGraph cluster is running normally, you can view the license file with the HTTP port (default port is 19559) of the meta service. For example: `curl -G "http://192.168.10.101:19559/license"`. diff --git a/docs-2.0/4.deployment-and-installation/manage-service.md b/docs-2.0/4.deployment-and-installation/manage-service.md index 0c1ddb05b1d..14790911fc8 100644 --- a/docs-2.0/4.deployment-and-installation/manage-service.md +++ b/docs-2.0/4.deployment-and-installation/manage-service.md @@ -1,4 +1,4 @@ -# Manage Nebula Graph Service +# Manage NebulaGraph Service {% include "/source_manage-service.md" %} diff --git a/docs-2.0/4.deployment-and-installation/manage-storage-host.md b/docs-2.0/4.deployment-and-installation/manage-storage-host.md index 823203f686e..ecd4d49f9e6 100644 --- a/docs-2.0/4.deployment-and-installation/manage-storage-host.md +++ b/docs-2.0/4.deployment-and-installation/manage-storage-host.md @@ -1,10 +1,10 @@ # Manage Storage hosts -Starting from Nebula Graph 3.0.0, setting Storage hosts in the configuration files only registers the hosts on the Meta side, but does not add them into the cluster. You must run the `ADD HOSTS` statement to add the Storage hosts. +Starting from NebulaGraph 3.0.0, setting Storage hosts in the configuration files only registers the hosts on the Meta side, but does not add them into the cluster. You must run the `ADD HOSTS` statement to add the Storage hosts. ## Add Storage hosts -Add the Storage hosts to a Nebula Graph cluster. +Add the Storage hosts to a NebulaGraph cluster. 
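For illustration, registering two Storage hosts and verifying them could look like the following sketch; the IP addresses are hypothetical, and `9779` is assumed as the default Storage port:

```ngql
# Register two Storage hosts with the Meta Service (hypothetical addresses).
ADD HOSTS 192.168.10.101:9779, 192.168.10.102:9779;
# Verify the result; the hosts are expected to appear once their
# heartbeats reach the Meta Service.
SHOW HOSTS;
```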
```ngql ADD HOSTS <ip>:<port> [,<ip>:<port> ...]; diff --git a/docs-2.0/4.deployment-and-installation/standalone-deployment.md b/docs-2.0/4.deployment-and-installation/standalone-deployment.md index 28f7cfaf231..a96d9efe81f 100644 --- a/docs-2.0/4.deployment-and-installation/standalone-deployment.md +++ b/docs-2.0/4.deployment-and-installation/standalone-deployment.md @@ -1,14 +1,14 @@ -# Standalone Nebula Graph +# Standalone NebulaGraph -Standalone Nebula Graph merges the Meta, Storage, and Graph services into a single process deployed on a single machine. This topic introduces scenarios, deployment steps, etc. of standalone Nebula Graph. +Standalone NebulaGraph merges the Meta, Storage, and Graph services into a single process deployed on a single machine. This topic introduces scenarios, deployment steps, etc. of standalone NebulaGraph. !!! danger - Do not use standalone Nebula Graph in production environments. + Do not use standalone NebulaGraph in production environments. ## Background -The traditional Nebula Graph consists of three services, each service having executable binary files and the corresponding process. Processes communicate with each other by RPC. In standalone Nebula Graph, the three processes corresponding to the three services are combined into one process. For more information about Nebula Graph, see [Architecture overview](../1.introduction/3.nebula-graph-architecture/1.architecture-overview.md). +The traditional NebulaGraph consists of three services, each service having executable binary files and the corresponding process. Processes communicate with each other by RPC. In standalone NebulaGraph, the three processes corresponding to the three services are combined into one process. For more information about NebulaGraph, see [Architecture overview](../1.introduction/3.nebula-graph-architecture/1.architecture-overview.md). ## Scenarios @@ -21,25 +21,25 @@ Small data sizes and low availability requirements.
For example, test environments. ## Resource requirements -For information about the resource requirements for standalone Nebula Graph, see [Software requirements for compiling Nebula Graph](1.resource-preparations.md). +For information about the resource requirements for standalone NebulaGraph, see [Software requirements for compiling NebulaGraph](1.resource-preparations.md). ## Steps -Currently, you can only install standalone Nebula Graph with the source code. The steps are similar to those of the multi-process Nebula Graph. You only need to modify the step **Generate Makefile with CMake** by adding `-DENABLE_STANDALONE_VERSION=on` to the command. For example: +Currently, you can only install standalone NebulaGraph with the source code. The steps are similar to those of the multi-process NebulaGraph. You only need to modify the step **Generate Makefile with CMake** by adding `-DENABLE_STANDALONE_VERSION=on` to the command. For example: ```bash cmake -DCMAKE_INSTALL_PREFIX=/usr/local/nebula -DENABLE_TESTING=OFF -DENABLE_STANDALONE_VERSION=on -DCMAKE_BUILD_TYPE=Release .. ``` -For more information about installation details, see [Install Nebula Graph by compiling the source code](2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md). +For more information about installation details, see [Install NebulaGraph by compiling the source code](2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md). -After installing standalone Nebula Graph, see the topic [connect to Service](connect-to-nebula-graph.md) to connect to Nebula Graph databases. +After installing standalone NebulaGraph, see the topic [connect to Service](connect-to-nebula-graph.md) to connect to NebulaGraph databases. ## Configuration file -The path to the configuration file for standalone Nebula Graph is `/usr/local/nebula/etc` by default. +The path to the configuration file for standalone NebulaGraph is `/usr/local/nebula/etc` by default.
-You can run `sudo cat nebula-standalone.conf.default` to see the file content. The parameters and the corresponding descriptions in the file are generally the same as the configurations for multi-process Nebula Graph except for the following parameters. +You can run `sudo cat nebula-standalone.conf.default` to see the file content. The parameters and the corresponding descriptions in the file are generally the same as the configurations for multi-process NebulaGraph except for the following parameters. | Parameter | Predefined value | Description | | ---------------- | ----------- | --------------------- | diff --git a/docs-2.0/5.configurations-and-logs/1.configurations/.1.get-configurations.md b/docs-2.0/5.configurations-and-logs/1.configurations/.1.get-configurations.md index e228aa7b2af..f49843221fd 100644 --- a/docs-2.0/5.configurations-and-logs/1.configurations/.1.get-configurations.md +++ b/docs-2.0/5.configurations-and-logs/1.configurations/.1.get-configurations.md @@ -1,6 +1,6 @@ # Get configurations -This document gives some methods to get configurations in Nebula Graph. +This document gives some methods to get configurations in NebulaGraph. !!! note @@ -8,19 +8,19 @@ This document gives some methods to get configurations in Nebula Graph. ## Get configurations from local -Add `--local_config=true` to the top of each configuration file (the default path is `/usr/local/nebula/etc/`). Restart all the Nebula Graph services to make your modifications take effect. We suggest that new users use this method. +Add `--local_config=true` to the top of each configuration file (the default path is `/usr/local/nebula/etc/`). Restart all the NebulaGraph services to make your modifications take effect. We suggest that new users use this method. ## Get configuration from Meta Service To get configuration from Meta Service, set the `--local_config` parameter to `false` or use the default configuration files. 
-When the services are started for the first time, Nebula Graph reads the configurations from local and then persists them in the Meta Service. Once the Meta Service is persisted, Nebula Graph reads configurations only from the Meta Service, even you restart Nebula Graph. +When the services are started for the first time, NebulaGraph reads the configurations from local and then persists them in the Meta Service. Once the configurations are persisted in the Meta Service, NebulaGraph reads configurations only from the Meta Service, even if you restart NebulaGraph. ## FAQ ## How to modify configurations -You can modify Nebula Graph configurations by using these methods: +You can modify NebulaGraph configurations by using these methods: - Modify configurations by using `UPDATE CONFIG`. For more information see UPDATE CONFIG (doc TODO). - Modify configurations by configuring the configuration files. For more information, see [Get configuration from local](#get_configuration_from_local). diff --git a/docs-2.0/5.configurations-and-logs/1.configurations/1.configurations.md b/docs-2.0/5.configurations-and-logs/1.configurations/1.configurations.md index e3eb664e01e..528b8aaf115 100644 --- a/docs-2.0/5.configurations-and-logs/1.configurations/1.configurations.md +++ b/docs-2.0/5.configurations-and-logs/1.configurations/1.configurations.md @@ -1,6 +1,6 @@ # Configurations -Nebula Graph builds the configurations based on the [gflags](https://gflags.github.io/gflags/) repository. Most configurations are flags. When the Nebula Graph service starts, it will get the configuration information from [Configuration files](#configuration_files) by default. Configurations that are not in the file apply the default values. +NebulaGraph builds the configurations based on the [gflags](https://gflags.github.io/gflags/) repository. Most configurations are flags. When the NebulaGraph service starts, it will get the configuration information from [Configuration files](#configuration_files) by default.
Configurations that are not in the file apply the default values. !!! enterpriseonly @@ -8,7 +8,7 @@ Nebula Graph builds the configurations based on the [gflags](https://gflags.gith !!! note - * Because there are many configurations and they may change as Nebula Graph develops, this topic will not introduce all configurations. To get detailed descriptions of configurations, follow the instructions below. + * Because there are many configurations and they may change as NebulaGraph develops, this topic will not introduce all configurations. To get detailed descriptions of configurations, follow the instructions below. * It is not recommended to modify the configurations that are not introduced in this topic, unless you are familiar with the source code and fully understand the function of configurations. !!! compatibility "Legacy version compatibility" @@ -36,7 +36,7 @@ $ /usr/local/nebula/bin/nebula-graphd --help $ /usr/local/nebula/bin/nebula-storaged --help ``` -The above examples use the default storage path `/usr/local/nebula/bin/`. If you modify the installation path of Nebula Graph, use the actual path to query the configurations. +The above examples use the default storage path `/usr/local/nebula/bin/`. If you modify the installation path of NebulaGraph, use the actual path to query the configurations. ## Get configurations @@ -63,27 +63,27 @@ curl 127.0.0.1:19779/flags ### Configuration files for clusters installed from source, with an RPM/DEB package, or a TAR package -Nebula Graph provides two initial configuration files for each service, `.conf.default` and `.conf.production`. You can use them in different scenarios conveniently. For clusters installed from source and with a RPM/DEB package, the default path is `/usr/local/nebula/etc/`. For clusters installed with a TAR package, the path is `//etc`. +NebulaGraph provides two initial configuration files for each service, `.conf.default` and `.conf.production`. 
You can use them in different scenarios conveniently. For clusters installed from source and with a RPM/DEB package, the default path is `/usr/local/nebula/etc/`. For clusters installed with a TAR package, the path is `//etc`. The configuration values in the initial configuration file are for reference only and can be adjusted according to actual needs. To use the initial configuration file, choose one of the above two files and delete the suffix `.default` or `.production` to make it valid. !!! caution - To ensure the availability of services, the configurations of the same service must be consistent, except for the local IP address `local_ip`. For example, three Storage servers are deployed in one Nebula Graph cluster. The configurations of the three Storage servers need to be the same, except for the IP address. + To ensure the availability of services, the configurations of the same service must be consistent, except for the local IP address `local_ip`. For example, three Storage servers are deployed in one NebulaGraph cluster. The configurations of the three Storage servers need to be the same, except for the IP address. The initial configuration files corresponding to each service are as follows. -| Nebula Graph service | Initial configuration file | Description | +| NebulaGraph service | Initial configuration file | Description | | - | - | - | | Meta | `nebula-metad.conf.default` and `nebula-metad.conf.production` | [Meta service configuration](2.meta-config.md) | | Graph | `nebula-graphd.conf.default` and `nebula-graphd.conf.production` | [Graph service configuration](3.graph-config.md) | | Storage | `nebula-storaged.conf.default` and `nebula-storaged.conf.production` | [Storage service configuration](4.storage-config.md) | -Each initial configuration file of all services contains `local_config`. The default value is `true`, which means that the Nebula Graph service will get configurations from its configuration files and start it. 
+Each initial configuration file of all services contains `local_config`. The default value is `true`, which means that the NebulaGraph service gets configurations from its configuration files when it starts. !!! caution - It is not recommended to modify the value of `local_config` to `false`. If modified, the Nebula Graph service will first read the cached configurations, which may cause configuration inconsistencies between clusters and cause unknown risks. + It is not recommended to modify the value of `local_config` to `false`. If modified, the NebulaGraph service will first read the cached configurations, which may cause configuration inconsistencies between clusters and cause unknown risks. ### Configuration files for clusters installed with Docker Compose @@ -100,13 +100,13 @@ For clusters installed with Kubectl through Nebula Operator, the configuration f ## Modify configurations -By default, each Nebula Graph service gets configured from its configuration files. You can modify configurations and make them valid according to the following steps: +By default, each NebulaGraph service gets configured from its configuration files. You can modify configurations and make them valid according to the following steps: * For clusters installed from source, with a RPM/DEB, or a TAR package 1. Use a text editor to modify the configuration files of the target service and save the modification. - 2. Choose an appropriate time to restart **all** Nebula Graph services to make the modifications valid. + 2. Choose an appropriate time to restart **all** NebulaGraph services to make the modifications valid. * For clusters installed with Docker Compose
+ For details, see [Customize configuration parameters for a NebulaGraph cluster](../../nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md). diff --git a/docs-2.0/5.configurations-and-logs/1.configurations/2.meta-config.md b/docs-2.0/5.configurations-and-logs/1.configurations/2.meta-config.md index 055c808f621..b1cb1434a9d 100644 --- a/docs-2.0/5.configurations-and-logs/1.configurations/2.meta-config.md +++ b/docs-2.0/5.configurations-and-logs/1.configurations/2.meta-config.md @@ -1,10 +1,10 @@ # Meta Service configuration -Nebula Graph provides two initial configuration files for the Meta Service, `nebula-metad.conf.default` and `nebula-metad.conf.production`. Users can use them in different scenarios conveniently. The default file path is `/usr/local/nebula/etc/`. +NebulaGraph provides two initial configuration files for the Meta Service, `nebula-metad.conf.default` and `nebula-metad.conf.production`. Users can use them in different scenarios conveniently. The default file path is `/usr/local/nebula/etc/`. !!! caution - * It is not recommended to modify the value of `local_config` to `false`. If modified, the Nebula Graph service will first read the cached configurations, which may cause configuration inconsistencies between clusters and cause unknown risks. + * It is not recommended to modify the value of `local_config` to `false`. If modified, the NebulaGraph service will first read the cached configurations, which may cause configuration inconsistencies between clusters and cause unknown risks. * It is not recommended to modify the configurations that are not introduced in this topic, unless you are familiar with the source code and fully understand the function of configurations. ## How to use the configuration files @@ -13,7 +13,7 @@ To use the initial configuration file, choose one of the above two files and del ## About parameter values -If a parameter is not set in the configuration file, Nebula Graph uses the default value. 
Not all parameters are predefined. And the predefined parameters in the two initial configuration files are different. This topic uses the parameters in `nebula-metad.conf.default`. +If a parameter is not set in the configuration file, NebulaGraph uses the default value. Not all parameters are predefined. And the predefined parameters in the two initial configuration files are different. This topic uses the parameters in `nebula-metad.conf.default`. For all parameters and their current values, see [Configurations](1.configurations.md). @@ -23,20 +23,20 @@ For all parameters and their current values, see [Configurations](1.configuratio | ----------- | ----------------------- | ---------------------------------------------------- | | `daemonize` | `true` | When set to `true`, the process is a daemon process. | | `pid_file` | `pids/nebula-metad.pid` | The file that records the process ID. | -| `timezone_name` | - | Specifies the Nebula Graph time zone. This parameter is not predefined in the initial configuration files. You can manually set it if you need it. The system default value is `UTC+00:00:00`. For the format of the parameter value, see [Specifying the Time Zone with TZ](https://www.gnu.org/software/libc/manual/html_node/TZ-Variable.html "Click to view the timezone-related content in the GNU C Library manual"). For example, `--timezone_name=UTC+08:00` represents the GMT+8 time zone.| -|`license_path`|`share/resources/nebula.license`| Path of the license of the Nebula Graph Enterprise Edition. Users need to [deploy a license file](../../4.deployment-and-installation/deploy-license.md) before starting the Enterprise Edition. This parameter is required only for the Nebula Graph Enterprise Edition. For details about how to configure licenses for other ecosystem tools, see the deployment documents of the corresponding ecosystem tools.| +| `timezone_name` | - | Specifies the NebulaGraph time zone. This parameter is not predefined in the initial configuration files. 
You can manually set it if you need it. The system default value is `UTC+00:00:00`. For the format of the parameter value, see [Specifying the Time Zone with TZ](https://www.gnu.org/software/libc/manual/html_node/TZ-Variable.html "Click to view the timezone-related content in the GNU C Library manual"). For example, `--timezone_name=UTC+08:00` represents the GMT+8 time zone.| +|`license_path`|`share/resources/nebula.license`| Path of the license of the NebulaGraph Enterprise Edition. Users need to [deploy a license file](../../4.deployment-and-installation/deploy-license.md) before starting the Enterprise Edition. This parameter is required only for the NebulaGraph Enterprise Edition. For details about how to configure licenses for other ecosystem tools, see the deployment documents of the corresponding ecosystem tools.| !!! note - * While inserting property values of [time types](../../3.ngql-guide/3.data-types/4.date-and-time.md), Nebula Graph transforms time types (except TIMESTAMP) to the corresponding UTC according to the time zone specified by `timezone_name`. The time-type values returned by nGQL queries are all UTC time. - * `timezone_name` is only used to transform the data stored in Nebula Graph. Other time-related data of the Nebula Graph processes still uses the default time zone of the host, such as the log printing time. + * While inserting property values of [time types](../../3.ngql-guide/3.data-types/4.date-and-time.md), NebulaGraph transforms time types (except TIMESTAMP) to the corresponding UTC according to the time zone specified by `timezone_name`. The time-type values returned by nGQL queries are all UTC time. + * `timezone_name` is only used to transform the data stored in NebulaGraph. Other time-related data of the NebulaGraph processes still uses the default time zone of the host, such as the log printing time. 
## Logging configurations | Name | Predefined value | Description | | :------------- | :------------------------ | :------------------------------------------------ | | `log_dir` | `logs` | The directory that stores the Meta Service log. It is recommended to put logs on a different hard disk from the data. | -| `minloglevel` | `0` | Specifies the minimum level of the log. That is, no logs below this level will be printed. Optional values are `0` (INFO), `1` (WARNING), `2` (ERROR), `3` (FATAL). It is recommended to set it to `0` during debugging and `1` in a production environment. If it is set to `4`, Nebula Graph will not print any logs. | +| `minloglevel` | `0` | Specifies the minimum level of the log. That is, no logs below this level will be printed. Optional values are `0` (INFO), `1` (WARNING), `2` (ERROR), `3` (FATAL). It is recommended to set it to `0` during debugging and `1` in a production environment. If it is set to `4`, NebulaGraph will not print any logs. | | `v` | `0` | Specifies the detailed level of the log. The larger the value, the more detailed the log is. Optional values are `0`, `1`, `2`, `3`. | | `logbufsecs` | `0` | Specifies the maximum time to buffer the logs. If there is a timeout, it will output the buffered log to the log file. `0` means real-time output. This configuration is measured in seconds. | |`redirect_stdout` |`true` | When set to `true`, the process redirects the`stdout` and `stderr` to separate output files. | @@ -55,7 +55,7 @@ For all parameters and their current values, see [Configurations](1.configuratio | `ws_ip` | `0.0.0.0` | Specifies the IP address for the HTTP service. | | `ws_http_port` | `19559` | Specifies the port for the HTTP service. | |`ws_storage_http_port`|`19779`| Specifies the Storage service listening port used by the HTTP protocol. It must be consistent with the `ws_http_port` in the Storage service configuration file.| -|`heartbeat_interval_secs` | `10` | Specifies the default heartbeat interval. 
Make sure the `heartbeat_interval_secs` values for all services are the same, otherwise Nebula Graph **CANNOT** work normally. This configuration is measured in seconds. | +|`heartbeat_interval_secs` | `10` | Specifies the default heartbeat interval. Make sure the `heartbeat_interval_secs` values for all services are the same, otherwise NebulaGraph **CANNOT** work normally. This configuration is measured in seconds. | !!! caution diff --git a/docs-2.0/5.configurations-and-logs/1.configurations/3.graph-config.md b/docs-2.0/5.configurations-and-logs/1.configurations/3.graph-config.md index 2212b9c0c9a..f8b25eed151 100644 --- a/docs-2.0/5.configurations-and-logs/1.configurations/3.graph-config.md +++ b/docs-2.0/5.configurations-and-logs/1.configurations/3.graph-config.md @@ -1,10 +1,10 @@ # Graph Service configuration -Nebula Graph provides two initial configuration files for the Graph Service, `nebula-graphd.conf.default` and `nebula-graphd.conf.production`. Users can use them in different scenarios conveniently. The default file path is `/usr/local/nebula/etc/`. +NebulaGraph provides two initial configuration files for the Graph Service, `nebula-graphd.conf.default` and `nebula-graphd.conf.production`. Users can use them in different scenarios conveniently. The default file path is `/usr/local/nebula/etc/`. !!! caution - * It is not recommended to modify the value of `local_config` to `false`. If modified, the Nebula Graph service will first read the cached configurations, which may cause configuration inconsistencies between clusters and cause unknown risks. + * It is not recommended to modify the value of `local_config` to `false`. If modified, the NebulaGraph service will first read the cached configurations, which may cause configuration inconsistencies between clusters and cause unknown risks. 
* It is not recommended to modify the configurations that are not introduced in this topic, unless you are familiar with the source code and fully understand the function of configurations. ## How to use the configuration files @@ -13,7 +13,7 @@ To use the initial configuration file, choose one of the above two files and del ## About parameter values -If a parameter is not set in the configuration file, Nebula Graph uses the default value. Not all parameters are predefined. And the predefined parameters in the two initial configuration files are different. This topic uses the parameters in `nebula-metad.conf.default`. +If a parameter is not set in the configuration file, NebulaGraph uses the default value. Not all parameters are predefined. And the predefined parameters in the two initial configuration files are different. This topic uses the parameters in `nebula-graphd.conf.default`. For all parameters and their current values, see [Configurations](1.configurations.md). @@ -24,20 +24,20 @@ For all parameters and their current values, see [Configurations](1.configuratio | `daemonize` | `true` | When set to `true`, the process is a daemon process. | | `pid_file` | `pids/nebula-graphd.pid`| The file that records the process ID. | |`enable_optimizer` |`true` | When set to `true`, the optimizer is enabled. | -| `timezone_name` | - | Specifies the Nebula Graph time zone. This parameter is not predefined in the initial configuration files. The system default value is `UTC+00:00:00`. For the format of the parameter value, see [Specifying the Time Zone with TZ](https://www.gnu.org/software/libc/manual/html_node/TZ-Variable.html "Click to view the timezone-related content in the GNU C Library manual"). For example, `--timezone_name=UTC+08:00` represents the GMT+8 time zone. | +| `timezone_name` | - | Specifies the NebulaGraph time zone. This parameter is not predefined in the initial configuration files. The system default value is `UTC+00:00:00`.
For the format of the parameter value, see [Specifying the Time Zone with TZ](https://www.gnu.org/software/libc/manual/html_node/TZ-Variable.html "Click to view the timezone-related content in the GNU C Library manual"). For example, `--timezone_name=UTC+08:00` represents the GMT+8 time zone. | | `local_config` | `true` | When set to `true`, the process gets configurations from the configuration files. | !!! note - * While inserting property values of [time types](../../3.ngql-guide/3.data-types/4.date-and-time.md), Nebula Graph transforms time types (except TIMESTAMP) to the corresponding UTC according to the time zone specified by `timezone_name`. The time-type values returned by nGQL queries are all UTC time. - * `timezone_name` is only used to transform the data stored in Nebula Graph. Other time-related data of the Nebula Graph processes still uses the default time zone of the host, such as the log printing time. + * While inserting property values of [time types](../../3.ngql-guide/3.data-types/4.date-and-time.md), NebulaGraph transforms time types (except TIMESTAMP) to the corresponding UTC according to the time zone specified by `timezone_name`. The time-type values returned by nGQL queries are all UTC time. + * `timezone_name` is only used to transform the data stored in NebulaGraph. Other time-related data of the NebulaGraph processes still uses the default time zone of the host, such as the log printing time. ## Logging configurations | Name | Predefined value | Description | | ------------- | ------------------------ | ------------------------------------------------ | | `log_dir` | `logs` | The directory that stores the Graph Service log. It is recommended to put logs on a different hard disk from the data. | -| `minloglevel` | `0` | Specifies the minimum level of the log. That is, no logs below this level will be printed. Optional values are `0` (INFO), `1` (WARNING), `2` (ERROR), `3` (FATAL).
It is recommended to set it to `0` during debugging and `1` in a production environment. If it is set to `4`, Nebula Graph will not print any logs. | +| `minloglevel` | `0` | Specifies the minimum level of the log. That is, no logs below this level will be printed. Optional values are `0` (INFO), `1` (WARNING), `2` (ERROR), `3` (FATAL). It is recommended to set it to `0` during debugging and `1` in a production environment. If it is set to `4`, NebulaGraph will not print any logs. | | `v` | `0` | Specifies the detailed level of the log. The larger the value, the more detailed the log is. Optional values are `0`, `1`, `2`, `3`. | | `logbufsecs` | `0` | Specifies the maximum time to buffer the logs. If there is a timeout, it will output the buffered log to the log file. `0` means real-time output. This configuration is measured in seconds. | |`redirect_stdout` |`true` | When set to `true`, the process redirects the `stdout` and `stderr` to separate output files. | @@ -71,7 +71,7 @@ For all parameters and their current values, see [Configurations](1.configuratio |`num_worker_threads` |`0` | Specifies the number of threads that execute queries. `0` is the number of CPU cores. | | `ws_ip` | `0.0.0.0` | Specifies the IP address for the HTTP service. | | `ws_http_port` | `19669` | Specifies the port for the HTTP service. | -|`heartbeat_interval_secs` | `10` | Specifies the default heartbeat interval. Make sure the `heartbeat_interval_secs` values for all services are the same, otherwise Nebula Graph **CANNOT** work normally. This configuration is measured in seconds. | +|`heartbeat_interval_secs` | `10` | Specifies the default heartbeat interval. Make sure the `heartbeat_interval_secs` values for all services are the same, otherwise NebulaGraph **CANNOT** work normally. This configuration is measured in seconds. | |`storage_client_timeout_ms` |-| Specifies the RPC connection timeout threshold between the Graph Service and the Storage Service.
This parameter is not predefined in the initial configuration files. You can manually set it if you need it. The system default value is `60000`ms. | |`ws_meta_http_port` |`19559`| Specifies the Meta service listening port used by the HTTP protocol. It must be consistent with the `ws_http_port` in the Meta service configuration file.| @@ -97,7 +97,7 @@ For all parameters and their current values, see [Configurations](1.configuratio | Name | Predefined value | Description | | ------------------- | ------------------------ | ------------------------------------------ | -| `system_memory_high_watermark_ratio` | `0.8` | Specifies the trigger threshold of the high-level memory alarm mechanism. If the system memory usage is higher than this value, an alarm mechanism will be triggered, and Nebula Graph will stop querying. This parameter is not predefined in the initial configuration files. | +| `system_memory_high_watermark_ratio` | `0.8` | Specifies the trigger threshold of the high-level memory alarm mechanism. If the system memory usage is higher than this value, an alarm mechanism will be triggered, and NebulaGraph will stop querying. This parameter is not predefined in the initial configuration files. | ## Audit configurations @@ -111,7 +111,7 @@ For more information about audit log, see [Audit log](../2.log-management/audit- | Name | Predefined value | Description | | - | - | - | -| `enable_space_level_metrics` | `false` | Enable or disable space-level metrics. Such metric names contain the name of the graph space that it monitors, for example, `query_latency_us{space=basketballplayer}.avg.3600`. You can view the supported metrics with the `curl` command. For more information, see [Query Nebula Graph metrics](../../6.monitor-and-metrics/1.query-performance-metrics.md). | +| `enable_space_level_metrics` | `false` | Enable or disable space-level metrics. 
Such metric names contain the name of the graph space that it monitors, for example, `query_latency_us{space=basketballplayer}.avg.3600`. You can view the supported metrics with the `curl` command. For more information, see [Query NebulaGraph metrics](../../6.monitor-and-metrics/1.query-performance-metrics.md). | ## session configurations diff --git a/docs-2.0/5.configurations-and-logs/1.configurations/4.storage-config.md b/docs-2.0/5.configurations-and-logs/1.configurations/4.storage-config.md index cb239ce768e..1e77a5798a3 100644 --- a/docs-2.0/5.configurations-and-logs/1.configurations/4.storage-config.md +++ b/docs-2.0/5.configurations-and-logs/1.configurations/4.storage-config.md @@ -1,10 +1,10 @@ # Storage Service configurations -Nebula Graph provides two initial configuration files for the Storage Service, `nebula-storaged.conf.default` and `nebula-storaged.conf.production`. Users can use them in different scenarios conveniently. The default file path is `/usr/local/nebula/etc/`. +NebulaGraph provides two initial configuration files for the Storage Service, `nebula-storaged.conf.default` and `nebula-storaged.conf.production`. Users can use them in different scenarios conveniently. The default file path is `/usr/local/nebula/etc/`. !!! caution - * It is not recommended to modify the value of `local_config` to `false`. If modified, the Nebula Graph service will first read the cached configurations, which may cause configuration inconsistencies between clusters and cause unknown risks. + * It is not recommended to modify the value of `local_config` to `false`. If modified, the NebulaGraph service will first read the cached configurations, which may cause configuration inconsistencies between clusters and cause unknown risks. * It is not recommended to modify the configurations that are not introduced in this topic, unless you are familiar with the source code and fully understand the function of configurations. 
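Both configuration topics above caution against flipping `local_config` to `false`. A quick sanity check is to grep the active configuration file for that flag. The sketch below uses a throwaway stand-in file; on a real install you would point `conf_file` at something like `/usr/local/nebula/etc/nebula-storaged.conf` (the path and flag values here are illustrative assumptions, not taken from the diff):

```shell
# Stand-in config file; substitute /usr/local/nebula/etc/nebula-storaged.conf
# (or nebula-graphd.conf / nebula-metad.conf) on a real install.
conf_file=$(mktemp)
printf -- '--daemonize=true\n--local_config=true\n' > "$conf_file"

# Warn loudly if local_config has been flipped to false (see the caution above).
if grep -q -- '--local_config=false' "$conf_file"; then
  echo "WARNING: local_config is false; the service may read cached configurations" >&2
else
  echo "local_config OK"
fi
```

The same check applies verbatim to the Graph and Meta configuration files, since all three services share the `local_config` flag.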
## How to use the configuration files @@ -13,7 +13,7 @@ To use the initial configuration file, choose one of the above two files and del ## About parameter values -If a parameter is not set in the configuration file, Nebula Graph uses the default value. Not all parameters are predefined. And the predefined parameters in the two initial configuration files are different. This topic uses the parameters in `nebula-metad.conf.default`. For parameters that are not included in `nebula-metad.conf.default`, see `nebula-storaged.conf.production`. +If a parameter is not set in the configuration file, NebulaGraph uses the default value. Not all parameters are predefined, and the predefined parameters in the two initial configuration files are different. This topic uses the parameters in `nebula-storaged.conf.default`. For parameters that are not included in `nebula-storaged.conf.default`, see `nebula-storaged.conf.production`. !!! Note @@ -27,20 +27,20 @@ For all parameters and their current values, see [Configurations](1.configuratio | :----------- | :----------------------- | :------------------| | `daemonize` | `true` | When set to `true`, the process is a daemon process. | | `pid_file` | `pids/nebula-storaged.pid` | The file that records the process ID. | -| `timezone_name` | - | Specifies the Nebula Graph time zone. This parameter is not predefined in the initial configuration files. The system default value is `UTC+00:00:00`. For the format of the parameter value, see [Specifying the Time Zone with TZ](https://www.gnu.org/software/libc/manual/html_node/TZ-Variable.html "Click to view the timezone-related content in the GNU C Library manual"). For example, `--timezone_name=UTC+08:00` represents the GMT+8 time zone. | +| `timezone_name` | - | Specifies the NebulaGraph time zone. This parameter is not predefined in the initial configuration files. The system default value is `UTC+00:00:00`.
For the format of the parameter value, see [Specifying the Time Zone with TZ](https://www.gnu.org/software/libc/manual/html_node/TZ-Variable.html "Click to view the timezone-related content in the GNU C Library manual"). For example, `--timezone_name=UTC+08:00` represents the GMT+8 time zone. | | `local_config` | `true` | When set to `true`, the process gets configurations from the configuration files. | !!! note - * While inserting property values of [time types](../../3.ngql-guide/3.data-types/4.date-and-time.md), Nebula Graph transforms time types (except TIMESTAMP) to the corresponding UTC according to the time zone specified by `timezone_name`. The time-type values returned by nGQL queries are all UTC. - * `timezone_name` is only used to transform the data stored in Nebula Graph. Other time-related data of the Nebula Graph processes still uses the default time zone of the host, such as the log printing time. + * While inserting property values of [time types](../../3.ngql-guide/3.data-types/4.date-and-time.md), NebulaGraph transforms time types (except TIMESTAMP) to the corresponding UTC according to the time zone specified by `timezone_name`. The time-type values returned by nGQL queries are all UTC. + * `timezone_name` is only used to transform the data stored in NebulaGraph. Other time-related data of the NebulaGraph processes still uses the default time zone of the host, such as the log printing time. ## Logging configurations | Name | Predefined value | Description | | :------------- | :------------------------ | :------------------------------------------------ | | `log_dir` | `logs` | The directory that stores the Storage Service log. It is recommended to put logs on a different hard disk from the data. | -| `minloglevel` | `0` | Specifies the minimum level of the log. That is, no logs below this level will be printed. Optional values are `0` (INFO), `1` (WARNING), `2` (ERROR), `3` (FATAL).
It is recommended to set it to `0` during debugging and `1` in a production environment. If it is set to `4`, NebulaGraph will not print any logs. | | `v` | `0` | Specifies the detailed level of the log. The larger the value, the more detailed the log is. Optional values are `0`, `1`, `2`, `3`. | | `logbufsecs` | `0` | Specifies the maximum time to buffer the logs. If there is a timeout, it will output the buffered log to the log file. `0` means real-time output. This configuration is measured in seconds. | |`redirect_stdout` | `true` | When set to `true`, the process redirects the `stdout` and `stderr` to separate output files. | @@ -58,7 +58,7 @@ For all parameters and their current values, see [Configurations](1.configuratio | `port` | `9779` | Specifies the RPC daemon listening port of the Storage service. The external port for the Storage Service is predefined to `9779`. The internal port is predefined to `9777`, `9778`, and `9780`. Nebula Graph uses the internal port for multi-replica interactions. | | `ws_ip` | `0.0.0.0` | Specifies the IP address for the HTTP service. | | `ws_http_port` | `19779` | Specifies the port for the HTTP service. | -|`heartbeat_interval_secs` | `10` | Specifies the default heartbeat interval. Make sure the `heartbeat_interval_secs` values for all services are the same, otherwise Nebula Graph **CANNOT** work normally. This configuration is measured in seconds. | +|`heartbeat_interval_secs` | `10` | Specifies the default heartbeat interval. Make sure the `heartbeat_interval_secs` values for all services are the same, otherwise NebulaGraph **CANNOT** work normally. This configuration is measured in seconds.
| !!! caution @@ -80,7 +80,7 @@ For all parameters and their current values, see [Configurations](1.configuratio | `minimum_reserved_bytes` | `268435456` | Specifies the minimum remaining space of each data storage path. When the value is lower than this standard, the cluster data writing may fail. This configuration is measured in bytes. | | `rocksdb_batch_size` | `4096` | Specifies the block cache for a batch operation. The configuration is measured in bytes. | | `rocksdb_block_cache` | `4` | Specifies the block cache for BlockBasedTable. The configuration is measured in megabytes.| -|`disable_page_cache` |`false`|Enables or disables the operating system's page cache for Nebula Graph. By default, the parameter value is `false` and page cache is enabled. If the value is set to `true`, page cache is disabled and sufficient block cache space must be configured for Nebula Graph.| +|`disable_page_cache` |`false`|Enables or disables the operating system's page cache for NebulaGraph. By default, the parameter value is `false` and page cache is enabled. If the value is set to `true`, page cache is disabled and sufficient block cache space must be configured for NebulaGraph.| | `engine_type` | `rocksdb` | Specifies the engine type. | | `rocksdb_compression` | `lz4` | Specifies the compression algorithm for RocksDB. Optional values are `no`, `snappy`, `lz4`, `lz4hc`, `zlib`, `bzip2`, and `zstd`. | | `rocksdb_compression_per_level` | \ | Specifies the compression algorithm for each level. | @@ -104,7 +104,7 @@ For all parameters and their current values, see [Configurations](1.configuratio !!! caution - The configuration `snapshot` in the following table is different from the snapshot in Nebula Graph. The `snapshot` here refers to the stock data on the leader when synchronizing Raft. + The configuration `snapshot` in the following table is different from the snapshot in NebulaGraph. The `snapshot` here refers to the stock data on the leader when synchronizing Raft. 
| Name | Predefined value | Description | | :-- | :----- | :--- | @@ -170,7 +170,7 @@ For more information, see [RocksDB official documentation](https://rocksdb.org/) !!! enterpriseonly - Only available for the Nebula Graph Enterprise Edition. + Only available for the NebulaGraph Enterprise Edition. | Name | Predefined value | Description | | :-----------------------------| :------| :------------------------------- | diff --git a/docs-2.0/5.configurations-and-logs/2.log-management/audit-log.md b/docs-2.0/5.configurations-and-logs/2.log-management/audit-log.md index e6bcffe84e1..a95d9d59c16 100644 --- a/docs-2.0/5.configurations-and-logs/2.log-management/audit-log.md +++ b/docs-2.0/5.configurations-and-logs/2.log-management/audit-log.md @@ -1,10 +1,10 @@ # Audit logs -The Nebula Graph audit logs store all operations received by graph service in categories, then provide the logs for users to track specific types of operations as needed. +The NebulaGraph audit logs store all operations received by the Graph service in categories, then provide the logs for users to track specific types of operations as needed. !!! enterpriseonly - Only available for the Nebula Graph Enterprise Edition. + Only available for the NebulaGraph Enterprise Edition. ## Log categories @@ -90,7 +90,7 @@ The fields of audit logs are the same for different handlers and formats. For ex |`CONNECTION_ID`| The session ID of the connection. | |`CONNECTION_STATUS`| The status of the connection. `0` indicates success, and other numbers indicate different error messages.| |`CONNECTION_MESSAGE`| An error message is displayed when the connection fails.| -|`USER`| The user name of the Nebula Graph connection. | +|`USER`| The user name of the NebulaGraph connection. | |`CLIENT_HOST`| The IP address of the client.| |`HOST`| The IP address of the host.
| |`SPACE`| The graph space where you perform queries.| diff --git a/docs-2.0/5.configurations-and-logs/2.log-management/logs.md b/docs-2.0/5.configurations-and-logs/2.log-management/logs.md index 652dec74401..5eb3d40f920 100644 --- a/docs-2.0/5.configurations-and-logs/2.log-management/logs.md +++ b/docs-2.0/5.configurations-and-logs/2.log-management/logs.md @@ -2,17 +2,17 @@ Runtime logs are provided for DBAs and developers to locate faults when the system fails. -**Nebula Graph** uses [glog](https://github.com/google/glog) to print runtime logs, uses [gflags](https://gflags.github.io/gflags/) to control the severity level of the log, and provides an HTTP interface to dynamically change the log level at runtime to facilitate tracking. +**NebulaGraph** uses [glog](https://github.com/google/glog) to print runtime logs, uses [gflags](https://gflags.github.io/gflags/) to control the severity level of the log, and provides an HTTP interface to dynamically change the log level at runtime to facilitate tracking. ## Log directory The default runtime log directory is `/usr/local/nebula/logs/`. -If the log directory is deleted while Nebula Graph is running, the log would not continue to be printed. However, this operation will not affect the services. To recover the logs, restart the services. +If the log directory is deleted while NebulaGraph is running, the log would not continue to be printed. However, this operation will not affect the services. To recover the logs, restart the services. ## Parameter descriptions -- `minloglevel`: Specifies the minimum level of the log. That is, no logs below this level will be printed. Optional values are `0` (INFO), `1` (WARNING), `2` (ERROR), `3` (FATAL). It is recommended to set it to `0` during debugging and `1` in a production environment. If it is set to `4`, Nebula Graph will not print any logs. +- `minloglevel`: Specifies the minimum level of the log. That is, no logs below this level will be printed. 
Optional values are `0` (INFO), `1` (WARNING), `2` (ERROR), `3` (FATAL). It is recommended to set it to `0` during debugging and `1` in a production environment. If it is set to `4`, NebulaGraph will not print any logs. - `v`: Specifies the detailed level of the log. The larger the value, the more detailed the log is. Optional values are `0`, `1`, `2`, `3`. @@ -69,7 +69,7 @@ $ curl -X PUT -H "Content-Type: application/json" -d '{"minloglevel":0,"v":3}' " ``` -If the log level is changed while Nebula Graph is running, it will be restored to the level set in the configuration file after restarting the service. To permanently modify it, see [Configuration files](../1.configurations/1.configurations.md). +If the log level is changed while NebulaGraph is running, it will be restored to the level set in the configuration file after restarting the service. To permanently modify it, see [Configuration files](../1.configurations/1.configurations.md). ## RocksDB runtime logs diff --git a/docs-2.0/6.monitor-and-metrics/1.query-performance-metrics.md b/docs-2.0/6.monitor-and-metrics/1.query-performance-metrics.md index 8054c81412b..d7fb4a58d56 100644 --- a/docs-2.0/6.monitor-and-metrics/1.query-performance-metrics.md +++ b/docs-2.0/6.monitor-and-metrics/1.query-performance-metrics.md @@ -1,10 +1,10 @@ -# Query Nebula Graph metrics +# Query NebulaGraph metrics -Nebula Graph supports querying the monitoring metrics through HTTP ports. +NebulaGraph supports querying the monitoring metrics through HTTP ports. ## Metrics structure -Each metric of Nebula Graph consists of three fields: name, type, and time range. The fields are separated by periods, for example, `num_queries.sum.600`. Different Nebula Graph services (Graph, Storage, or Meta) support different metrics. The detailed description is as follows. +Each metric of NebulaGraph consists of three fields: name, type, and time range. The fields are separated by periods, for example, `num_queries.sum.600`. 
Different NebulaGraph services (Graph, Storage, or Meta) support different metrics. The detailed description is as follows. |Field|Example|Description| |-|-|-| @@ -16,7 +16,7 @@ Each metric of Nebula Graph consists of three fields: name, type, and time range The Graph service supports a set of space-level metrics that record the information of different graph spaces separately. -To enable space-level metrics, set the value of `enable_space_level_metrics` to `true` in the Graph service configuration file before starting Nebula Graph. For details about how to modify the configuration, see [Configuration Management](../5.configurations-and-logs/1.configurations/1.configurations.md). +To enable space-level metrics, set the value of `enable_space_level_metrics` to `true` in the Graph service configuration file before starting NebulaGraph. For details about how to modify the configuration, see [Configuration Management](../5.configurations-and-logs/1.configurations/1.configurations.md). !!! note @@ -39,7 +39,7 @@ curl -G "http://<ip>:<port>/stats?stats=<metric_name_list> [&format=json]" !!! note - If Nebula Graph is deployed with [Docker Compose](..//4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md), run `docker-compose ps` to check the ports that are mapped from the service ports inside of the container and then query through them. + If NebulaGraph is deployed with [Docker Compose](../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md), run `docker-compose ps` to check the ports that are mapped from the service ports inside the container and then query through them. ### Examples @@ -77,7 +77,7 @@ curl -G "http://<ip>:<port>/stats?stats=<metric_name_list> [&format=json]" * Query all metrics in a service. - If no metric is specified in the query, Nebula Graph returns all metrics in the service. + If no metric is specified in the query, NebulaGraph returns all metrics in the service.
```bash $ curl -G "http://192.168.8.40:19559/stats" diff --git a/docs-2.0/6.monitor-and-metrics/2.rocksdb-statistics.md b/docs-2.0/6.monitor-and-metrics/2.rocksdb-statistics.md index ca542422aa4..ab6a356fb8e 100644 --- a/docs-2.0/6.monitor-and-metrics/2.rocksdb-statistics.md +++ b/docs-2.0/6.monitor-and-metrics/2.rocksdb-statistics.md @@ -1,6 +1,6 @@ # RocksDB statistics -Nebula Graph uses RocksDB as the underlying storage. This topic describes how to collect and show the RocksDB statistics of Nebula Graph. +NebulaGraph uses RocksDB as the underlying storage. This topic describes how to collect and show the RocksDB statistics of NebulaGraph. ## Enable RocksDB diff --git a/docs-2.0/7.data-security/1.authentication/1.authentication.md b/docs-2.0/7.data-security/1.authentication/1.authentication.md index 4068a46c5c7..59484e9a9e5 100644 --- a/docs-2.0/7.data-security/1.authentication/1.authentication.md +++ b/docs-2.0/7.data-security/1.authentication/1.authentication.md @@ -1,18 +1,18 @@ # Authentication -Nebula Graph replies on local authentication or LDAP authentication to implement access control. +NebulaGraph relies on local authentication or LDAP authentication to implement access control. -Nebula Graph creates a session when a client connects to it. The session stores information about the connection, including the user information. If the authentication system is enabled, the session will be mapped to corresponding users. +NebulaGraph creates a session when a client connects to it. The session stores information about the connection, including the user information. If the authentication system is enabled, the session will be mapped to corresponding users. !!! Note - By default, the authentication is disabled and Nebula Graph allows connections with the username `root` and any password. + By default, the authentication is disabled and NebulaGraph allows connections with the username `root` and any password.
-Nebula Graph supports local authentication and LDAP authentication. +NebulaGraph supports local authentication and LDAP authentication. ## Local authentication -Local authentication indicates that usernames and passwords are stored locally on the server, with the passwords encrypted. Users will be authenticated when trying to visit Nebula Graph. +Local authentication indicates that usernames and passwords are stored locally on the server, with the passwords encrypted. Users will be authenticated when trying to visit NebulaGraph. ### Enable local authentication @@ -24,15 +24,15 @@ Local authentication indicates that usernames and passwords are stored locally o - `--password_lock_time_in_secs`: This parameter is optional, and you need to add this parameter manually. Specifies how long the account is locked after multiple incorrect password entries. Unit: second. -2. Restart the Nebula Graph services. For how to restart, see [Manage Nebula Graph services](../../2.quick-start/5.start-stop-service.md). +2. Restart the NebulaGraph services. For how to restart, see [Manage NebulaGraph services](../../2.quick-start/5.start-stop-service.md). !!! note - You can use the username `root` and password `nebula` to log into Nebula Graph after enabling local authentication. This account has the build-in God role. For more information about roles, see [Roles and privileges](3.role-list.md). + You can use the username `root` and password `nebula` to log into NebulaGraph after enabling local authentication. This account has the built-in God role. For more information about roles, see [Roles and privileges](3.role-list.md). ## LDAP authentication -Lightweight Directory Access Protocol (LDAP) is a lightweight client-server protocol for accessing directories and building a centralized account management system. LDAP authentication and local authentication can be enabled at the same time, but LDAP authentication has a higher priority.
If the local authentication server and the LDAP server both have the information of user `Amber`, Nebula Graph reads from the LDAP server first. +Lightweight Directory Access Protocol (LDAP) is a lightweight client-server protocol for accessing directories and building a centralized account management system. LDAP authentication and local authentication can be enabled at the same time, but LDAP authentication has a higher priority. If the local authentication server and the LDAP server both have the information of user `Amber`, NebulaGraph reads from the LDAP server first. ### Enable LDAP authentication diff --git a/docs-2.0/7.data-security/1.authentication/2.management-user.md b/docs-2.0/7.data-security/1.authentication/2.management-user.md index a81df558140..59f62590c4c 100644 --- a/docs-2.0/7.data-security/1.authentication/2.management-user.md +++ b/docs-2.0/7.data-security/1.authentication/2.management-user.md @@ -1,12 +1,12 @@ # User management -User management is an indispensable part of Nebula Graph access control. This topic describes how to manage users and roles. +User management is an indispensable part of NebulaGraph access control. This topic describes how to manage users and roles. -After [enabling authentication](1.authentication.md), only valid users can connect to Nebula Graph and access the resources according to the [user roles](3.role-list.md). +After [enabling authentication](1.authentication.md), only valid users can connect to NebulaGraph and access the resources according to the [user roles](3.role-list.md). !!! Note - * By default, the authentication is disabled. Nebula Graph allows connections with the username `root` and any password. + * By default, the authentication is disabled. NebulaGraph allows connections with the username `root` and any password. * Once the role of a user is modified, the user has to re-login to make the new role take effect.
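As a concrete illustration of the account lifecycle this topic manages (the statements follow the `CREATE USER`, `GRANT ROLE`, `REVOKE ROLE`, and `DROP USER` syntax covered later in this file; the user `user1`, its password, and the space `basketballplayer` are made-up examples, not taken from the diff):

```ngql
nebula> CREATE USER IF NOT EXISTS user1 WITH PASSWORD 'example_password';
nebula> GRANT ROLE USER ON basketballplayer TO user1;
// Per the note above, user1 must log in again before a changed role takes effect.
nebula> REVOKE ROLE USER ON basketballplayer FROM user1;
nebula> DROP USER user1;
```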
@@ -23,7 +23,7 @@ The `root` user with the **GOD** role can run `CREATE USER` to create a new user - `IF NOT EXISTS`: Detects if the user name exists. The user will be created only if the user name does not exist. - `user_name`: Sets the name of the user. - `password`: Sets the password of the user. - - `ip_list`(Enterprise): Sets the IP address whitelist. The user can connect to Nebula Graph only from IP addresses in the list. Use commas to separate multiple IP addresses. + - `ip_list`(Enterprise): Sets the IP address whitelist. The user can connect to NebulaGraph only from IP addresses in the list. Use commas to separate multiple IP addresses. - Example @@ -42,7 +42,7 @@ The `root` user with the **GOD** role can run `CREATE USER` to create a new user ## GRANT ROLE -Users with the **GOD** role or the **ADMIN** role can run `GRANT ROLE` to assign a built-in role in a graph space to a user. For more information about Nebula Graph built-in roles, see [Roles and privileges](3.role-list.md). +Users with the **GOD** role or the **ADMIN** role can run `GRANT ROLE` to assign a built-in role in a graph space to a user. For more information about NebulaGraph built-in roles, see [Roles and privileges](3.role-list.md). * Syntax @@ -58,7 +58,7 @@ Users with the **GOD** role or the **ADMIN** role can run `GRANT ROLE` to assign ## REVOKE ROLE -Users with the **GOD** role or the **ADMIN** role can run `REVOKE ROLE` to revoke the built-in role of a user in a graph space. For more information about Nebula Graph built-in roles, see [Roles and privileges](3.role-list.md). +Users with the **GOD** role or the **ADMIN** role can run `REVOKE ROLE` to revoke the built-in role of a user in a graph space. For more information about NebulaGraph built-in roles, see [Roles and privileges](3.role-list.md). * Syntax @@ -145,7 +145,7 @@ The `root` user with the **GOD** role can run `ALTER USER` to set a new password !!! 
enterpriseonly - When `WITH IP WHITELIST` is not used, the IP address whitelist is removed and the user can connect to the Nebula Graph by any IP address. + When `WITH IP WHITELIST` is not used, the IP address whitelist is removed and the user can connect to NebulaGraph from any IP address. ```ngql nebula> ALTER USER user2 WITH PASSWORD 'nebula'; diff --git a/docs-2.0/7.data-security/1.authentication/3.role-list.md b/docs-2.0/7.data-security/1.authentication/3.role-list.md index b2a90d07a3f..35778422468 100644 --- a/docs-2.0/7.data-security/1.authentication/3.role-list.md +++ b/docs-2.0/7.data-security/1.authentication/3.role-list.md @@ -4,7 +4,7 @@ A role is a collection of privileges. You can assign a role to a [user](2.manage ## Built-in roles -Nebula Graph does not support custom roles, but it has multiple built-in roles: +NebulaGraph does not support custom roles, but it has multiple built-in roles: * GOD @@ -47,7 +47,7 @@ Nebula Graph does not support custom roles, but it has multiple built-in roles: !!! note - * Nebula Graph does not support custom roles. Users can only use the default built-in roles. + * NebulaGraph does not support custom roles. Users can only use the default built-in roles. * A user can have only one role in a graph space. For authenticated users, see [User management](2.management-user.md). ## Role privileges and allowed nGQL diff --git a/docs-2.0/7.data-security/1.authentication/4.ldap.md b/docs-2.0/7.data-security/1.authentication/4.ldap.md index dfeffe27209..452c7c9908d 100644 --- a/docs-2.0/7.data-security/1.authentication/4.ldap.md +++ b/docs-2.0/7.data-security/1.authentication/4.ldap.md @@ -1,6 +1,6 @@ # OpenLDAP authentication -This topic introduces how to connect Nebula Graph to the OpenLDAP server and use the DN (Distinguished Name) and password defined in OpenLDAP for authentication.
+This topic introduces how to connect NebulaGraph to the OpenLDAP server and use the DN (Distinguished Name) and password defined in OpenLDAP for authentication. !!! enterpriseonly @@ -8,7 +8,7 @@ This topic introduces how to connect Nebula Graph to the OpenLDAP server and use ## Authentication method -After the OpenLDAP authentication is enabled and users log into Nebula Graph with the account and password, Nebula Graph checks whether the login account exists in the Meta service. If the account exists, Nebula Graph finds the corresponding DN in OpenLDAP according to the authentication method and verifies the password. +After the OpenLDAP authentication is enabled and users log into NebulaGraph with the account and password, NebulaGraph checks whether the login account exists in the Meta service. If the account exists, NebulaGraph finds the corresponding DN in OpenLDAP according to the authentication method and verifies the password. OpenLDAP supports two authentication methods: simple bind authentication (SimpleBindAuth) and search bind authentication (SearchBindAuth). @@ -32,7 +32,7 @@ Search bind authentication reads the Graph service configuration information and Take the existing account `test2` and password `passwdtest2` on OpenLDAP as an example. -1. [Connect to Nebula Graph](../../4.deployment-and-installation/connect-to-nebula-graph.md), create and authorize the shadow account `test2` corresponding to OpenLDAP. +1. [Connect to NebulaGraph](../../4.deployment-and-installation/connect-to-nebula-graph.md), create and authorize the shadow account `test2` corresponding to OpenLDAP. ```ngql nebula> CREATE USER test2 WITH PASSWORD ''; @@ -41,7 +41,7 @@ Take the existing account `test2` and password `passwdtest2` on OpenLDAP as an e !!! note - When creating an account in Nebula Graph, the password can be set arbitrarily. + When creating an account in NebulaGraph, the password can be set arbitrarily. 2. 
Edit the configuration file `nebula-graphd.conf` (The default path is`/usr/local/nebula/etc/`): @@ -85,7 +85,7 @@ Take the existing account `test2` and password `passwdtest2` on OpenLDAP as an e --ldap_basedn=ou=it,dc=sys,dc=com ``` -3. [Restart Nebula Graph services](../../4.deployment-and-installation/manage-service.md) to make the new configuration valid. +3. [Restart NebulaGraph services](../../4.deployment-and-installation/manage-service.md) to make the new configuration valid. 4. Run the login test. @@ -93,7 +93,7 @@ Take the existing account `test2` and password `passwdtest2` on OpenLDAP as an e $ ./nebula-console --addr 127.0.0.1 --port 9669 -u test2 -p passwdtest2 2021/09/08 03:49:39 [INFO] connection pool is initialized successfully - Welcome to Nebula Graph! + Welcome to NebulaGraph! ``` !!! note diff --git a/docs-2.0/7.data-security/4.ssl.md b/docs-2.0/7.data-security/4.ssl.md index a25ec1a758d..3eb764a4da6 100644 --- a/docs-2.0/7.data-security/4.ssl.md +++ b/docs-2.0/7.data-security/4.ssl.md @@ -1,6 +1,6 @@ # SSL encryption -Nebula Graph supports data transmission with SSL encryption between clients, the Graph service, the Meta service, and the Storage service. This topic describes how to enable SSL encryption. +NebulaGraph supports data transmission with SSL encryption between clients, the Graph service, the Meta service, and the Storage service. This topic describes how to enable SSL encryption. ## Precaution @@ -20,7 +20,7 @@ Enabling SSL encryption will slightly affect the performance, such as causing op ## Certificate modes -To use SSL encryption, SSL certificates are required. Nebula Graph supports two certificate modes. +To use SSL encryption, SSL certificates are required. NebulaGraph supports two certificate modes. - Self-signed certificate mode @@ -32,7 +32,7 @@ To use SSL encryption, SSL certificates are required. Nebula Graph supports two ## Encryption policies -Nebula Graph supports three encryption policies. 
For details, see [Usage explanation](https://github.com/vesoft-inc/nebula/blob/a67d166b284cae1b534bf8d19c936ee38bf12e29/docs/rfcs/0001-ssl-transportation.md#usage-explanation). +NebulaGraph supports three encryption policies. For details, see [Usage explanation](https://github.com/vesoft-inc/nebula/blob/a67d166b284cae1b534bf8d19c936ee38bf12e29/docs/rfcs/0001-ssl-transportation.md#usage-explanation). - Encrypt the data transmission between clients, the Graph service, the Meta service, and the Storage service. diff --git a/docs-2.0/8.service-tuning/2.graph-modeling.md b/docs-2.0/8.service-tuning/2.graph-modeling.md index 1d80e10afbe..1c067f7f55b 100644 --- a/docs-2.0/8.service-tuning/2.graph-modeling.md +++ b/docs-2.0/8.service-tuning/2.graph-modeling.md @@ -1,10 +1,10 @@ # Graph data modeling suggestions -This topic provides general suggestions for modeling data in Nebula Graph. +This topic provides general suggestions for modeling data in NebulaGraph. !!! note - The following suggestions may not apply to some special scenarios. In these cases, find help in the [Nebula Graph community](https://discuss.nebula-graph.io/). + The following suggestions may not apply to some special scenarios. In these cases, find help in the [NebulaGraph community](https://discuss.nebula-graph.io/). ## Model for performance @@ -16,15 +16,15 @@ Usually, various types of queries are validated in test scenarios to assess the ### Full-graph scanning avoidance -Graph traversal can be performed after one or more vertices/edges are located through property indexes or VIDs. But for some query patterns, such as subgraph and path query patterns, the source vertex or edge of the traversal cannot be located through property indexes or VIDs. These queries find all the subgraphs that satisfy the query pattern by scanning the whole graph space which will have poor query performance. Nebula Graph does not implement indexing for the graph structures of subgraphs or paths. 
+Graph traversal can be performed after one or more vertices/edges are located through property indexes or VIDs. But for some query patterns, such as subgraph and path query patterns, the source vertex or edge of the traversal cannot be located through property indexes or VIDs. These queries find all the subgraphs that satisfy the query pattern by scanning the whole graph space, which results in poor query performance. NebulaGraph does not implement indexing for the graph structures of subgraphs or paths. ### No predefined bonds between Tags and Edge types -Define the bonds between Tags and Edge types in the application, not Nebula Graph. There are no statements that could get the bonds between Tags and Edge types. +Define the bonds between Tags and Edge types in the application, not NebulaGraph. There are no statements that can retrieve the bonds between Tags and Edge types. ### Tags/Edge types predefine a set of properties -While creating Tags or Edge types, you need to define a set of properties. Properties are part of the Nebula Graph Schema. +While creating Tags or Edge types, you need to define a set of properties. Properties are part of the NebulaGraph Schema. ### Control changes in the business model and the data model @@ -32,7 +32,7 @@ Changes here refer to changes in business models and data models (meta-informati Some graph databases are designed to be Schema-free, so their data modeling, including the modeling of the graph topology and properties, can be very flexible. Properties can be re-modeled to graph topology, and vice versa. Such systems are often specifically optimized for graph topology access. -Nebula Graph {{ nebula.release }} is a strong-Schema (row storage) system, which means that the business model should not change frequently. For example, the property Schema should not change. It is similar to avoiding `ALTER TABLE` in MySQL. 
+NebulaGraph {{ nebula.release }} is a strong-Schema (row storage) system, which means that the business model should not change frequently. For example, the property Schema should not change. It is similar to avoiding `ALTER TABLE` in MySQL. On the contrary, vertices and their edges can be added or deleted at low costs. Thus, the easy-to-change part of the business model should be transformed to vertices or edges, rather than properties. @@ -40,7 +40,7 @@ For example, in a business model, people have relatively fixed properties such a ### Set temporary properties through self-loop edges -As a strong Schema system, Nebula Graph does not support List-type properties. And using `ALTER TAG` costs too much. If you need to add some temporary properties or List-type properties to a vertex, you can first create an edge type with the required properties, and then insert one or more edges that direct to the vertex itself. The figure is as follows. +As a strong Schema system, NebulaGraph does not support List-type properties. And using `ALTER TAG` costs too much. If you need to add some temporary properties or List-type properties to a vertex, you can first create an edge type with the required properties, and then insert one or more edges that direct to the vertex itself. The figure is as follows. ![loop property](https://docs-cdn.nebula-graph.com.cn/figures/loop-property.png) @@ -77,13 +77,13 @@ Operations on loops are not encapsulated with any syntactic sugars and you can u A dangling edge is an edge that only connects to a single vertex and only one part of the edge connects to the vertex. -In Nebula Graph {{ nebula.release }}, dangling edges may appear in the following two cases. +In NebulaGraph {{ nebula.release }}, dangling edges may appear in the following two cases. 1. Insert edges with [INSERT EDGE](../3.ngql-guide/13.edge-statements/1.insert-edge.md) statement before the source vertex or the destination vertex exists. 2. 
Delete vertices with [DELETE VERTEX](../3.ngql-guide/12.vertex-statements/4.delete-vertex.md) statement and the `WITH EDGE` option is not used. At this time, the system does not delete the related outgoing and incoming edges of the vertices. There will be dangling edges by default. -Dangling edges may appear in Nebula Graph {{nebula.release}} as the design allow it to exist. And there is no MERGE statement like openCypher has. The existence of dangling edges depends entirely on the application level. You can use [GO](../3.ngql-guide/7.general-query-statements/3.go.md) and [LOOKUP](../3.ngql-guide/7.general-query-statements/5.lookup.md) statements to find a dangling edge, but cannot use the [MATCH](../3.ngql-guide/7.general-query-statements/2.match.md) statement to find a dangling edge. +Dangling edges may appear in NebulaGraph {{nebula.release}} because the design allows them to exist. And there is no MERGE statement like openCypher's. The existence of dangling edges depends entirely on the application level. You can use [GO](../3.ngql-guide/7.general-query-statements/3.go.md) and [LOOKUP](../3.ngql-guide/7.general-query-statements/5.lookup.md) statements to find a dangling edge, but cannot use the [MATCH](../3.ngql-guide/7.general-query-statements/2.match.md) statement to find a dangling edge. Examples: @@ -123,9 +123,9 @@ Empty set (time spent 3153/3573 us) ### Breadth-first traversal over depth-first traversal -- Nebula Graph has lower performance for depth-first traversal based on the Graph topology, and better performance for breadth-first traversal and obtaining properties. For example, if model A contains properties "name", "age", and "eye color", it is recommended to create a tag `person` and add properties `name`, `age`, and `eye_color` to it. If you create a tag `eye_color` and an edge type `has`, and then create an edge to represent the eye color owned by the person, the traversal performance will not be high. 
+- NebulaGraph has lower performance for depth-first traversal based on the Graph topology, and better performance for breadth-first traversal and obtaining properties. For example, if model A contains properties "name", "age", and "eye color", it is recommended to create a tag `person` and add properties `name`, `age`, and `eye_color` to it. If you create a tag `eye_color` and an edge type `has`, and then create an edge to represent the eye color owned by the person, the traversal performance will not be high. -- The performance of finding an edge by an edge property is close to that of finding a vertex by a vertex property. For some databases, it is recommended to re-model edge properties as those of the intermediate vertices. For example, model the pattern `(src)-[edge {P1, P2}]->(dst)` as `(src)-[edge1]->(i_node {P1, P2})-[edge2]->(dst)`. With Nebula Graph {{ nebula.release }}, you can use `(src)-[edge {P1, P2}]->(dst)` directly to decrease the depth of the traversal and increase the performance. +- The performance of finding an edge by an edge property is close to that of finding a vertex by a vertex property. For some databases, it is recommended to re-model edge properties as those of the intermediate vertices. For example, model the pattern `(src)-[edge {P1, P2}]->(dst)` as `(src)-[edge1]->(i_node {P1, P2})-[edge2]->(dst)`. With NebulaGraph {{ nebula.release }}, you can use `(src)-[edge {P1, P2}]->(dst)` directly to decrease the depth of the traversal and increase the performance. ### Edge directions @@ -153,13 +153,13 @@ See [VID](../1.introduction/3.vid.md). ### Long texts -Do not use long texts to create edge properties. Edge properties are stored twice and long texts lead to greater write amplification. For how edges properties are stored, see [Storage architecture](../1.introduction/3.nebula-graph-architecture/4.storage-service.md). It is recommended to store long texts in HBase or Elasticsearch and store its address in Nebula Graph. 
+Do not use long texts to create edge properties. Edge properties are stored twice and long texts lead to greater write amplification. For how edge properties are stored, see [Storage architecture](../1.introduction/3.nebula-graph-architecture/4.storage-service.md). It is recommended to store long texts in HBase or Elasticsearch and store their addresses in NebulaGraph. ## Dynamic graphs (sequence graphs) are not supported In some scenarios, graphs need to have the time information to describe how the structure of the entire graph changes over time.[^twitter] -The Rank field on Edges in Nebula Graph {{ nebula.release }} can be used to store time in int64, but no field on vertices can do this because if you store the time information as property values, it will be covered by new insertion. Thus Nebula Graph does not support sequence graphs. +The Rank field on Edges in NebulaGraph {{ nebula.release }} can be used to store time in int64, but no field on vertices can do this because if you store the time information as property values, it will be overwritten by new insertions. Thus NebulaGraph does not support sequence graphs. ![image](https://docs-cdn.nebula-graph.com.cn/figures/sequence.png) diff --git a/docs-2.0/8.service-tuning/3.system-design.md b/docs-2.0/8.service-tuning/3.system-design.md index 65ed519687b..bae122e23d6 100644 --- a/docs-2.0/8.service-tuning/3.system-design.md +++ b/docs-2.0/8.service-tuning/3.system-design.md @@ -2,14 +2,14 @@ ## QPS or low-latency first -- Nebula Graph {{ nebula.release }} is good at handling small requests with high concurrency. In such scenarios, the whole graph is huge, containing maybe trillions of vertices or edges, but the subgraphs accessed by each request are not large (containing millions of vertices or edges), and the latency of a single request is low. The concurrent number of such requests, i.e., the QPS, can be huge. +- NebulaGraph {{ nebula.release }} is good at handling small requests with high concurrency. 
In such scenarios, the whole graph is huge, containing maybe trillions of vertices or edges, but the subgraphs accessed by each request are not large (containing millions of vertices or edges), and the latency of a single request is low. The concurrent number of such requests, i.e., the QPS, can be huge. - On the other hand, in interactive analysis scenarios, the request concurrency is usually not high, but the subgraphs accessed by each request are large, with thousands of millions of vertices or edges. To lower the latency of big requests in such scenarios, you can split big requests into multiple small requests in the application, and concurrently send them to multiple graphd processes. This can decrease the memory used by each graphd process as well. Besides, you can use [Nebula Algorithm](../nebula-algorithm.md) for such scenarios. ## Data transmission and optimization -- Read/write balance. Nebula Graph fits into OLTP scenarios with balanced read/write, i.e., concurrent write and read. It is not suitable for OLAP scenarios that usually need to write once and read many times. +- Read/write balance. NebulaGraph fits into OLTP scenarios with balanced read/write, i.e., concurrent write and read. It is not suitable for OLAP scenarios that usually need to write once and read many times. - Select different write methods. For large batches of data writing, use SST files. For small batches of data writing, use `INSERT`. - Run `COMPACTION` and `BALANCE` jobs to optimize data format and storage distribution at the right time. -- Nebula Graph {{ nebula.release }} does not support transactions and isolation in the relational database and is closer to NoSQL. +- NebulaGraph {{ nebula.release }} does not support transactions and isolation in the sense of relational databases and is closer to NoSQL. 
## Query preheating and data preheating diff --git a/docs-2.0/8.service-tuning/4.plan.md b/docs-2.0/8.service-tuning/4.plan.md index 76c9b384c59..530aa7f2486 100644 --- a/docs-2.0/8.service-tuning/4.plan.md +++ b/docs-2.0/8.service-tuning/4.plan.md @@ -1,5 +1,5 @@ # Execution plan -Nebula Graph {{ nebula.release }} applies rule-based execution plans. Users cannot change execution plans, pre-compile queries (and corresponding plan cache), or accelerate queries by specifying indexes. +NebulaGraph {{ nebula.release }} applies rule-based execution plans. Users cannot change execution plans, pre-compile queries (and corresponding plan cache), or accelerate queries by specifying indexes. To view the execution plan and executive summary, see [EXPLAIN and PROFILE](../3.ngql-guide/17.query-tuning-statements/1.explain-and-profile.md). diff --git a/docs-2.0/8.service-tuning/compaction.md b/docs-2.0/8.service-tuning/compaction.md index 13a7c77da14..9b5c50eb5d2 100644 --- a/docs-2.0/8.service-tuning/compaction.md +++ b/docs-2.0/8.service-tuning/compaction.md @@ -2,7 +2,7 @@ This topic gives some information about compaction. -In Nebula Graph, `Compaction` is the most important background process and has an important effect on performance. +In NebulaGraph, `Compaction` is the most important background process and has an important effect on performance. `Compaction` reads the data that is written on the hard disk, then re-organizes the data structure and the indexes, and then writes back to the hard disk. The read performance can increase by times after compaction. Thus, to get high read performance, trigger `compaction` (full `compaction`) manually when writing a large amount of data into Nebula Graph. @@ -10,7 +10,7 @@ In Nebula Graph, `Compaction` is the most important background process and has a Note that `compaction` leads to long-time hard disk IO. We suggest that users do compaction during off-peak hours (for example, early morning). 
-Nebula Graph has two types of `compaction`: automatic `compaction` and full `compaction`. +NebulaGraph has two types of `compaction`: automatic `compaction` and full `compaction`. ## Automatic `compaction` diff --git a/docs-2.0/8.service-tuning/load-balance.md b/docs-2.0/8.service-tuning/load-balance.md index 648cd43603a..872ee479079 100644 --- a/docs-2.0/8.service-tuning/load-balance.md +++ b/docs-2.0/8.service-tuning/load-balance.md @@ -10,7 +10,7 @@ You can use the `BALANCE` statement to balance the distribution of partitions an !!! enterpriseonly - Only available for the Nebula Graph Enterprise Edition. + Only available for the NebulaGraph Enterprise Edition. !!! note @@ -81,7 +81,7 @@ After you add new storage hosts into the cluster, no partition is deployed on th +-----------------+------+-----------+----------+--------------+----------------------+------------------------+-------------+ ``` -If any subtask fails, run `RECOVER JOB ` to recover the failed jobs. If redoing load balancing does not solve the problem, ask for help in the [Nebula Graph community](https://discuss.nebula-graph.io/). +If any subtask fails, run `RECOVER JOB ` to recover the failed jobs. If redoing load balancing does not solve the problem, ask for help in the [NebulaGraph community](https://discuss.nebula-graph.io/). ### Stop data balancing @@ -103,7 +103,7 @@ To restore a balance job in the `FAILED` or `STOPPED` status, run `RECOVER JOB < !!! note - For a `STOPPED` `BALANCE DATA` job, Nebula Graph detects whether the same type of `FAILED` jobs or `FINISHED` jobs have been created since the start time of the job. If so, the `STOPPED` job cannot be restored. For example, if chronologically there are STOPPED job1, FINISHED job2, and STOPPED Job3, only job3 can be restored, and job1 cannot. + For a `STOPPED` `BALANCE DATA` job, NebulaGraph detects whether the same type of `FAILED` jobs or `FINISHED` jobs have been created since the start time of the job. 
If so, the `STOPPED` job cannot be restored. For example, if chronologically there are STOPPED job1, FINISHED job2, and STOPPED Job3, only job3 can be restored, and job1 cannot. ### Migrate partition @@ -208,7 +208,7 @@ After you add new storage hosts into the zone, no partition is deployed on the n +------------------+------+-----------+----------+--------------+-----------------------------------+------------------------+---------+ ``` -If any subtask fails, run [`RECOVER JOB `](../synchronization-and-migration/2.balance-syntax.md) to restart the balancing. If redoing load balancing does not solve the problem, ask for help in the [Nebula Graph community](https://discuss.nebula-graph.io/). +If any subtask fails, run [`RECOVER JOB `](../synchronization-and-migration/2.balance-syntax.md) to restart the balancing. If redoing load balancing does not solve the problem, ask for help in the [NebulaGraph community](https://discuss.nebula-graph.io/). ## Stop data balancing @@ -279,4 +279,4 @@ nebula> SHOW HOSTS; !!! caution - In Nebula Graph {{ nebula.release }}, switching leaders will cause a large number of short-term request errors (Storage Error `E_RPC_FAILURE`). For solutions, see [FAQ](../20.appendix/0.FAQ.md). + In NebulaGraph {{ nebula.release }}, switching leaders will cause a large number of short-term request errors (Storage Error `E_RPC_FAILURE`). For solutions, see [FAQ](../20.appendix/0.FAQ.md). diff --git a/docs-2.0/8.service-tuning/practice.md b/docs-2.0/8.service-tuning/practice.md index 0865969b7df..8301f4d1ee2 100644 --- a/docs-2.0/8.service-tuning/practice.md +++ b/docs-2.0/8.service-tuning/practice.md @@ -1,6 +1,6 @@ # Best practices -Nebula Graph is used in a variety of industries. This topic presents a few best practices for using Nebula Graph. For more best practices, see [Blog](https://nebula-graph.io/posts/). +NebulaGraph is used in a variety of industries. This topic presents a few best practices for using NebulaGraph. 
For more best practices, see [Blog](https://nebula-graph.io/posts/). ## Scenarios @@ -12,22 +12,22 @@ Nebula Graph is used in a variety of industries. This topic presents a few best ## Kernel -- [Nebula Graph Source Code Explained: Variable-Length Pattern Matching](https://nebula-graph.io/posts/nebula-graph-source-code-reading-06/) +- [NebulaGraph Source Code Explained: Variable-Length Pattern Matching](https://nebula-graph.io/posts/nebula-graph-source-code-reading-06/) -- [Adding a Test Case for Nebula Graph](https://nebula-graph.io/posts/add-test-case-nebula-graph/) +- [Adding a Test Case for NebulaGraph](https://nebula-graph.io/posts/add-test-case-nebula-graph/) -- [BDD-Based Integration Testing Framework for Nebula Graph: Part Ⅰ](https://nebula-graph.io/posts/bdd-testing-practice/) +- [BDD-Based Integration Testing Framework for NebulaGraph: Part Ⅰ](https://nebula-graph.io/posts/bdd-testing-practice/) -- [BDD-Based Integration Testing Framework for Nebula Graph: Part II](https://nebula-graph.io/posts/bdd-testing-practice-volume-2/) +- [BDD-Based Integration Testing Framework for NebulaGraph: Part II](https://nebula-graph.io/posts/bdd-testing-practice-volume-2/) -- [Understanding Subgraph in Nebula Graph](https://nebula-graph.io/posts/nebula-graph-subgraph-introduction/) +- [Understanding Subgraph in NebulaGraph](https://nebula-graph.io/posts/nebula-graph-subgraph-introduction/) -- [Full-Text Indexing in Nebula Graph](https://nebula-graph.io/posts/how-fulltext-index-works/) +- [Full-Text Indexing in NebulaGraph](https://nebula-graph.io/posts/how-fulltext-index-works/) ## Ecosystem tool - [Validating Import Performance of Nebula Importer](https://nebula-graph.io/posts/nebula-importer-practice/) -- [Ecosystem Tools: Nebula Graph Dashboard for Monitoring](https://nebula-graph.io/posts/what-is-nebula-dashboard/) +- [Ecosystem Tools: NebulaGraph Dashboard for Monitoring](https://nebula-graph.io/posts/what-is-nebula-dashboard/) - [Visualizing Graph Data with Nebula 
Explorer](https://nebula-graph.io/posts/what-is-nebula-explorer/) diff --git a/docs-2.0/8.service-tuning/super-node.md b/docs-2.0/8.service-tuning/super-node.md index 0c4619c37a9..b29f36634f1 100644 --- a/docs-2.0/8.service-tuning/super-node.md +++ b/docs-2.0/8.service-tuning/super-node.md @@ -6,7 +6,7 @@ In graph theory, a super vertex, also known as a dense vertex, is a vertex with Super vertices are very common because of the power-law distribution. For example, popular leaders in social networks (Internet celebrities), top stocks in the stock market, Big Four in the banking system, hubs in transportation networks, websites with high clicking rates on the Internet, and best sellers in E-commerce. -In Nebula Graph {{ nebula.release }}, a `vertex` and its `properties` form a `key-value pair`, with its `VID` and other meta information as the `key`. Its `Out-Edge Key-Value` and `In-Edge Key-Value` are stored in [the same partition](../1.introduction/3.nebula-graph-architecture/4.storage-service.md) in the form of LSM-trees in hard disks and caches. +In NebulaGraph {{ nebula.release }}, a `vertex` and its `properties` form a `key-value pair`, with its `VID` and other meta information as the `key`. Its `Out-Edge Key-Value` and `In-Edge Key-Value` are stored in [the same partition](../1.introduction/3.nebula-graph-architecture/4.storage-service.md) in the form of LSM-trees in hard disks and caches. Therefore, `directed traversals from this vertex` and `directed traversals ending at this vertex` both involve either `a large number of sequential IO scans` (ideally, after [Compaction](../8.service-tuning/compaction.md) or a large number of `random IO` (frequent writes to `the vertex` and its `ingoing and outgoing edges`). @@ -14,15 +14,15 @@ As a rule of thumb, a vertex is considered dense when the number of its edges ex !!! Note - In Nebula Graph {{ nebula.release }}, there is not any data structure to store the out/in degree for each vertex. 
Therefore, there is no direct method to know whether it is a super vertex or not. You can try to use Spark to count the degrees periodically. + In NebulaGraph {{ nebula.release }}, there is no data structure to store the out/in degree of each vertex. Therefore, there is no direct method to know whether a given vertex is a super vertex or not. You can try to use Spark to count the degrees periodically. ### Indexes for duplicate properties In a property graph, there is another class of cases similar to super vertices: **a property has a very high duplication rate**, i.e., many vertices with the same `tag` but different `VIDs` have identical property and property values. -Property indexes in Nebula Graph {{ nebula.release }} are designed to reuse the functionality of RocksDB in the Storage Service, in which case indexes are modeled as `keys with the same prefix`. If the lookup of a property fails to hit the cache, it is processed as a random seek and a sequential prefix scan on the hard disk to find the corresponding VID. After that, the graph is usually traversed from this vertex, so that another random read and sequential scan for the corresponding key-value of this vertex will be triggered. The higher the duplication rate, the larger the scan range. +Property indexes in NebulaGraph {{ nebula.release }} are designed to reuse the functionality of RocksDB in the Storage Service, in which case indexes are modeled as `keys with the same prefix`. If the lookup of a property fails to hit the cache, it is processed as a random seek and a sequential prefix scan on the hard disk to find the corresponding VID. After that, the graph is usually traversed from this vertex, so that another random read and sequential scan for the corresponding key-value of this vertex will be triggered. The higher the duplication rate, the larger the scan range. 
-For more information about property indexes, see [How indexing works in Nebula Graph](https://nebula-graph.io/posts/how-indexing-works-in-nebula-graph/). +For more information about property indexes, see [How indexing works in NebulaGraph](https://nebula-graph.io/posts/how-indexing-works-in-nebula-graph/). Usually, special design and processing are required when the number of duplicate property values exceeds 10,000. diff --git a/docs-2.0/README.md b/docs-2.0/README.md index 3146ded876a..c5f536eac6f 100644 --- a/docs-2.0/README.md +++ b/docs-2.0/README.md @@ -1,18 +1,18 @@ -# Welcome to Nebula Graph {{ nebula.release }} Documentation +# Welcome to NebulaGraph {{ nebula.release }} Documentation !!! caution - The documents of this version are for Nebula Graph Enterprise Edition {{ nebula.release }}, Nebula Graph Community Edition {{ nebula.release }}, and the corresponding tools. For details, see [Release notes](20.appendix/release-note.md). + The documents of this version are for NebulaGraph Enterprise Edition {{ nebula.release }}, NebulaGraph Community Edition {{ nebula.release }}, and the corresponding tools. For details, see [Release notes](20.appendix/release-note.md). !!! note This manual is revised on {{ now().year }}-{{ now().month }}-{{ now().day }}, with GitHub commit [{{ git.short_commit }}](https://github.com/vesoft-inc/nebula-docs/commits/v{{nebula.release}}). -Nebula Graph is a distributed, scalable, and lightning-fast graph database. It is the optimal solution in the world capable of hosting graphs with dozens of billions of vertices (nodes) and trillions of edges (relationships) with millisecond latency. +NebulaGraph is a distributed, scalable, and lightning-fast graph database. It is the optimal solution in the world capable of hosting graphs with dozens of billions of vertices (nodes) and trillions of edges (relationships) with millisecond latency. 
## Getting started -* [Learning path](20.appendix/learning-path.md) & [Get Nebula Graph Certifications](https://academic.nebula-graph.io/?lang=EN_US) +* [Learning path](20.appendix/learning-path.md) & [Get NebulaGraph Certifications](https://academic.nebula-graph.io/?lang=EN_US) * [What is Nebula Graph](1.introduction/1.what-is-nebula-graph.md) * [Quick start](2.quick-start/1.quick-start-workflow.md) * [Preparations before deployment](4.deployment-and-installation/1.resource-preparations.md) @@ -22,7 +22,7 @@ Nebula Graph is a distributed, scalable, and lightning-fast graph database. It i ## Other Sources -- [Nebula Graph Homepage](https://nebula-graph.io/) +- [NebulaGraph Homepage](https://nebula-graph.io/) - [Release notes](20.appendix/release-note.md) - [Forum](https://discuss.nebula-graph.io/) - [Blogs](https://nebula-graph.io/posts/) @@ -63,8 +63,8 @@ This manual has over 80 compatibilities and corresponding tips. !!! enterpriseonly - Differences between the Nebula Graph Community and Enterprise editions. + Differences between the NebulaGraph Community and Enterprise editions. ## Modify errors -This Nebula Graph manual is written in the Markdown language. Users can click the pencil sign on the upper right side of each document title and modify errors. +This NebulaGraph manual is written in the Markdown language. Users can click the pencil sign on the upper right side of each document title and modify errors. diff --git a/docs-2.0/backup-and-restore/3.manage-snapshot.md b/docs-2.0/backup-and-restore/3.manage-snapshot.md index c8b81b4a613..79aa2982163 100644 --- a/docs-2.0/backup-and-restore/3.manage-snapshot.md +++ b/docs-2.0/backup-and-restore/3.manage-snapshot.md @@ -1,10 +1,10 @@ # Backup and restore data with snapshots -Nebula Graph supports using snapshots to back up and restore data. When data loss or misoperation occurs, the data will be restored through the snapshot. +NebulaGraph supports using snapshots to back up and restore data. 
When data loss or misoperation occurs, the data will be restored through the snapshot. ## Prerequisites -Nebula Graph [authentication](../7.data-security/1.authentication/1.authentication.md) is disabled by default. In this case, all users can use the snapshot feature. +NebulaGraph [authentication](../7.data-security/1.authentication/1.authentication.md) is disabled by default. In this case, all users can use the snapshot feature. If authentication is enabled, only the GOD role user can use the snapshot feature. For more information about roles, see [Roles and privileges](../7.data-security/1.authentication/3.role-list.md). @@ -12,13 +12,13 @@ If authentication is enabled, only the GOD role user can use the snapshot featur * To prevent data loss, create a snapshot as soon as the system structure changes, for example, after operations such as `ADD HOST`, `DROP HOST`, `CREATE SPACE`, `DROP SPACE`, and `BALANCE` are performed. -* Nebula Graph cannot automatically delete the invalid files created by a failed snapshot task. You have to manually delete them by using [`DROP SNAPSHOT`](#delete_snapshots). +* NebulaGraph cannot automatically delete the invalid files created by a failed snapshot task. You have to manually delete them by using [`DROP SNAPSHOT`](#delete_snapshots). * Customizing the storage path for snapshots is not supported for now. The default path is `/usr/local/nebula/data`. ## Snapshot form and path -Nebula Graph snapshots are stored in the form of directories with names like `SNAPSHOT_2021_03_09_08_43_12`. The suffix `2021_03_09_08_43_12` is generated automatically based on the creation time (UTC). +NebulaGraph snapshots are stored in the form of directories with names like `SNAPSHOT_2021_03_09_08_43_12`. The suffix `2021_03_09_08_43_12` is generated automatically based on the creation time (UTC). 
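The snapshot naming convention in the changed line above encodes the creation time (UTC) directly in the directory name. As an illustrative sketch only (the helper below is made up for this document and is not part of the NebulaGraph tooling), the timestamp can be recovered like this:

```python
from datetime import datetime, timezone

def parse_snapshot_name(name: str) -> datetime:
    """Recover the UTC creation time from a snapshot directory name,
    e.g. 'SNAPSHOT_2021_03_09_08_43_12'."""
    prefix = "SNAPSHOT_"
    if not name.startswith(prefix):
        raise ValueError(f"not a snapshot directory name: {name}")
    # The suffix is YYYY_MM_DD_HH_MM_SS in UTC.
    ts = datetime.strptime(name[len(prefix):], "%Y_%m_%d_%H_%M_%S")
    return ts.replace(tzinfo=timezone.utc)

print(parse_snapshot_name("SNAPSHOT_2021_03_09_08_43_12"))
# → 2021-03-09 08:43:12+00:00
```

A side effect of this convention is that sorting snapshot directory names lexically also sorts them chronologically, which helps when cleaning up invalid directories left behind by failed snapshot tasks.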
When a snapshot is created, snapshot directories will be automatically created in the `checkpoints` directory on the leader Meta server and each Storage server. @@ -34,7 +34,7 @@ $ find |grep 'SNAPSHOT_2021_03_09_08_43_12' ## Create snapshots -Run `CREATE SNAPSHOT` to create a snapshot for all the graph spaces based on the current time for Nebula Graph. Creating a snapshot for a specific graph space is not supported yet. +Run `CREATE SNAPSHOT` to create a snapshot for all the graph spaces based on the current time for NebulaGraph. Creating a snapshot for a specific graph space is not supported yet. !!! note @@ -111,5 +111,5 @@ Currently, there is no command to restore data with snapshots. You need to manua ## Related documents -Besides snapshots, users can also use Backup&Restore (BR) to backup or restore Nebula Graph data. For more information, see [Backup&Restore](2.backup-restore/1.what-is-br.md). +Besides snapshots, users can also use Backup&Restore (BR) to back up or restore NebulaGraph data. For more information, see [Backup&Restore](2.backup-restore/1.what-is-br.md). --> diff --git a/docs-2.0/backup-and-restore/nebula-br/1.what-is-br.md b/docs-2.0/backup-and-restore/nebula-br/1.what-is-br.md index 0ae01faf9ad..dadc2711aab 100644 --- a/docs-2.0/backup-and-restore/nebula-br/1.what-is-br.md +++ b/docs-2.0/backup-and-restore/nebula-br/1.what-is-br.md @@ -1,6 +1,6 @@ # What is Backup & Restore -Backup & Restore (BR for short) is a Command-Line Interface (CLI) tool to back up data of graph spaces of Nebula Graph and to restore data from the backup files. +Backup & Restore (BR for short) is a Command-Line Interface (CLI) tool to back up data of graph spaces of NebulaGraph and to restore data from the backup files. ## Features @@ -10,12 +10,12 @@ The BR has the following features. It supports: - Restoring data in the following backup file types: - Local Disk (SSD or HDD). It is recommended to use local disks in test environments only.
- Amazon S3 compatible interface, such as Alibaba Cloud OSS, MinIO, Ceph RGW, etc. -- Backing up and restoring the entire Nebula Graph cluster. +- Backing up and restoring the entire NebulaGraph cluster. - Backing up data of specified graph spaces (experimental). ## Limitations -- Supports Nebula Graph v{{ nebula.release }} only. +- Supports NebulaGraph v{{ nebula.release }} only. - Supports full backup, but not incremental backup. - Currently, Nebula Listener and full-text indexes do not support backup. - Backup and restore are supported when there is only one metad process configured in the local file. diff --git a/docs-2.0/backup-and-restore/nebula-br/3.br-backup-data.md b/docs-2.0/backup-and-restore/nebula-br/3.br-backup-data.md index 1eb484c723a..6cd067ea4a4 100644 --- a/docs-2.0/backup-and-restore/nebula-br/3.br-backup-data.md +++ b/docs-2.0/backup-and-restore/nebula-br/3.br-backup-data.md @@ -8,7 +8,7 @@ To back up data with the BR, do a check of these: - [Install BR and Agent](2.compile-br.md) and run Agent on each host in the cluster. -- The Nebula Graph services are running. +- The NebulaGraph services are running. - If you store the backup files locally, create a directory with the same absolute path on the meta servers, the storage servers, and the BR machine for the backup files and get the absolute path. Make sure the account has write privileges for this directory. @@ -65,4 +65,4 @@ The parameters are as follows. ## Next to do -After the backup files are generated, you can use the BR to restore them for Nebula Graph. For more information, see [Use BR to restore data](4.br-restore-data.md). +After the backup files are generated, you can use the BR to restore them for NebulaGraph. For more information, see [Use BR to restore data](4.br-restore-data.md).
diff --git a/docs-2.0/backup-and-restore/nebula-br/4.br-restore-data.md b/docs-2.0/backup-and-restore/nebula-br/4.br-restore-data.md index 71eecd63de0..8881d01dbfe 100644 --- a/docs-2.0/backup-and-restore/nebula-br/4.br-restore-data.md +++ b/docs-2.0/backup-and-restore/nebula-br/4.br-restore-data.md @@ -1,10 +1,10 @@ # Use BR to restore data -If you use the BR to back up data, you can use it to restore the data to Nebula Graph. This topic introduces how to use the BR to restore data from backup files. +If you use the BR to back up data, you can use it to restore the data to NebulaGraph. This topic introduces how to use the BR to restore data from backup files. !!! caution - During the restoration process, the data on the target Nebula Graph cluster is removed and then is replaced with the data from the backup files. If necessary, back up the data on the target cluster. + During the restoration process, the data on the target NebulaGraph cluster is removed and then is replaced with the data from the backup files. If necessary, back up the data on the target cluster. !!! caution @@ -16,9 +16,9 @@ To restore data with the BR, do a check of these: - [Install BR and Agent](2.compile-br.md) and run Agent on each host in the cluster. -- No application is connected to the target Nebula Graph cluster. +- No application is connected to the target NebulaGraph cluster. -- Make sure that the target and the source Nebula Graph clusters have the same topology, which means that they have exactly the same number of hosts. The number of data folders for each host is consistently distributed. +- Make sure that the target and the source NebulaGraph clusters have the same topology, which means that they have exactly the same number of hosts. The number of data folders for each host is consistently distributed. 
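The same-topology prerequisite in the restore document above (identical host counts and a matching distribution of data folders per host) can be expressed as a small check. This is a hypothetical helper for illustration, not part of the BR tool:

```python
def same_topology(source: dict, target: dict) -> bool:
    """source/target map each storage host to its number of data directories.
    Restoring requires the same number of hosts and a matching distribution
    of data-folder counts across those hosts."""
    if len(source) != len(target):
        return False  # host counts differ
    return sorted(source.values()) == sorted(target.values())

src = {"192.168.8.1": 2, "192.168.8.2": 2, "192.168.8.3": 1}
dst = {"10.0.0.1": 2, "10.0.0.2": 1, "10.0.0.3": 2}
print(same_topology(src, dst))  # → True: 3 hosts each, data-dir counts 1/2/2 on both sides
```

The host addresses themselves may differ between the clusters; only the counts have to line up.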
## Procedures diff --git a/docs-2.0/graph-computing/0.deploy-controller-analytics.md b/docs-2.0/graph-computing/0.deploy-controller-analytics.md index 7e53cb791e3..d9bc89bb46e 100644 --- a/docs-2.0/graph-computing/0.deploy-controller-analytics.md +++ b/docs-2.0/graph-computing/0.deploy-controller-analytics.md @@ -2,13 +2,13 @@ Dag Controller is a task scheduling tool that can schedule jobs of the DAG (directed acyclic graph) type. The job consists of multiple tasks that form a directed acyclic graph, and there are dependencies between the tasks. -The Dag Controller can perform complex graph computing with Nebula Analytics. For example, the Dag Controller sends an algorithm request to Nebula Analytics, which saves the result to Nebula Graph or HDFS. The Dag Controller then takes the result as input to the next algorithmic task to create a new task. +The Dag Controller can perform complex graph computing with Nebula Analytics. For example, the Dag Controller sends an algorithm request to Nebula Analytics, which saves the result to NebulaGraph or HDFS. The Dag Controller then takes the result as input to the next algorithmic task to create a new task. This topic describes how to use the Dag Controller. !!! enterpriseonly - Only available for the Nebula Graph Enterprise Edition. + Only available for the NebulaGraph Enterprise Edition. ## Prerequisites @@ -139,7 +139,7 @@ After the Nebula Analytics and the Dag Controller are configured and started, yo ### Will the Dag Controller service crash if the Graph service returns too much result data? -The Dag Controller service only provides scheduling capabilities and will not crash, but the Nebula Analytics service may crash due to insufficient memory when writing too much data to HDFS or Nebula Graph, or reading too much data from HDFS or Nebula Graph.
+The Dag Controller service only provides scheduling capabilities and will not crash, but the Nebula Analytics service may crash due to insufficient memory when writing too much data to HDFS or NebulaGraph, or reading too much data from HDFS or NebulaGraph. ### Can I continue a job from a failed task? diff --git a/docs-2.0/graph-computing/algorithm-description.md b/docs-2.0/graph-computing/algorithm-description.md index 995fac9ac20..8caa82571df 100644 --- a/docs-2.0/graph-computing/algorithm-description.md +++ b/docs-2.0/graph-computing/algorithm-description.md @@ -1,9 +1,9 @@ # Algorithm overview -Graph computing can detect the graph structure, such as the communities in a graph and the division of a graph. It can also reveal the inherent characteristics of the correlation between various vertexes, such as the centrality and similarity of the vertices. This topic introduces the algorithms and parameters supported by Nebula Graph. +Graph computing can detect the graph structure, such as the communities in a graph and the division of a graph. It can also reveal the inherent characteristics of the correlation between various vertices, such as the centrality and similarity of the vertices. This topic introduces the algorithms and parameters supported by NebulaGraph. !!! note @@ -22,7 +22,7 @@ Nebula Graph supports some graph computing tools. This topic describes the algor - If the data source comes from HDFS, users need to specify a CSV file that contains `src` and `dst` columns. Some algorithms also need to contain a `weight` column. - - If the data source comes from Nebula Graph, users need to specify the edge types that provide `src` and `dst` columns. Some algorithms also need to specify the properties of the edge types as `weight` columns. + - If the data source comes from NebulaGraph, users need to specify the edge types that provide `src` and `dst` columns. Some algorithms also need to specify the properties of the edge types as `weight` columns.
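For the HDFS data source described above, the expected CSV is simply an edge list with `src` and `dst` columns and, for weighted algorithms, a `weight` column. Here is a made-up three-edge sample; the vertex IDs echo the `basketballplayer` example space used elsewhere in these docs:

```python
import csv
import io

# A minimal weighted edge list in the shape the HDFS data source expects.
edge_csv = """src,dst,weight
player100,team200,1997
player101,team200,1999
player102,team203,2006
"""

rows = list(csv.DictReader(io.StringIO(edge_csv)))
print(rows[0])  # → {'src': 'player100', 'dst': 'team200', 'weight': '1997'}
```

Unweighted algorithms simply ignore the `weight` column; when the source is NebulaGraph instead, an edge type and one of its properties play the roles of these columns.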
## Node importance measurement diff --git a/docs-2.0/graph-computing/nebula-algorithm.md b/docs-2.0/graph-computing/nebula-algorithm.md index 9cb5711e0c2..30330fa2b2b 100644 --- a/docs-2.0/graph-computing/nebula-algorithm.md +++ b/docs-2.0/graph-computing/nebula-algorithm.md @@ -1,12 +1,12 @@ # Nebula Algorithm -[Nebula Algorithm](https://github.com/vesoft-inc/nebula-algorithm) (Algorithm) is a Spark application based on [GraphX](https://spark.apache.org/graphx/). It uses a complete algorithm tool to perform graph computing on the data in the Nebula Graph database by submitting a Spark task. You can also programmatically use the algorithm under the lib repository to perform graph computing on DataFrame. +[Nebula Algorithm](https://github.com/vesoft-inc/nebula-algorithm) (Algorithm) is a Spark application based on [GraphX](https://spark.apache.org/graphx/). It uses a complete algorithm tool to perform graph computing on the data in the NebulaGraph database by submitting a Spark task. You can also programmatically use the algorithm under the lib repository to perform graph computing on DataFrame. ## Version compatibility -The correspondence between the Nebula Algorithm release and the Nebula Graph core release is as follows. +The correspondence between the Nebula Algorithm release and the NebulaGraph core release is as follows. -|Nebula Algorithm|Nebula Graph| +|Nebula Algorithm|NebulaGraph| |:---|:---| |3.0-SNAPSHOT | nightly | |{{algorithm.release}}| {{nebula.release}} | @@ -18,7 +18,7 @@ The correspondence between the Nebula Algorithm release and the Nebula Graph cor Before using the Nebula Algorithm, users need to confirm the following information: -- The Nebula Graph services have been deployed and started. For details, see [Nebula Installation](4.deployment-and-installation/1.resource-preparations.md). +- The NebulaGraph services have been deployed and started. For details, see [Nebula Installation](4.deployment-and-installation/1.resource-preparations.md). 
- The Spark version is 2.4.x. @@ -60,13 +60,13 @@ The graph computing algorithms supported by Nebula Algorithm are as follows. !!! note - When writing the algorithm results into the Nebula Graph, make sure that the tag in the corresponding graph space has properties names and data types corresponding to the table above. + When writing the algorithm results into NebulaGraph, make sure that the tag in the corresponding graph space has property names and data types corresponding to the table above. ## Implementation methods Nebula Algorithm implements graph computing as follows: -1. Read the graph data of DataFrame from the Nebula Graph database using the Nebula Spark Connector. +1. Read the graph data of DataFrame from the NebulaGraph database using the Nebula Spark Connector. 2. Transform the graph data of DataFrame to the GraphX graph. @@ -121,7 +121,7 @@ The `lib` repository provides 10 common graph algorithms. 2. Use the algorithm (take PageRank as an example) by filling in parameters. For more examples, see [example](https://github.com/vesoft-inc/nebula-algorithm/tree/master/example/src/main/scala/com/vesoft/nebula/algorithm). !!! note - By default, the DataFrame that executes the algorithm sets the first column as the starting vertex, the second column as the destination vertex, and the third column as the edge weights (not the rank in the Nebula Graph). + By default, the DataFrame that executes the algorithm sets the first column as the starting vertex, the second column as the destination vertex, and the third column as the edge weights (not the rank in NebulaGraph). ```bash val prConfig = new PRConfig(5, 1.0) @@ -133,7 +133,7 @@ The `lib` repository provides 10 common graph algorithms. ### Submit the algorithm package directly !!! note - There are limitations to use sealed packages. For example, when sinking a repository into Nebula Graph, the property name of the tag created in the sunk graph space must match the preset name in the code.
The first method is recommended if the user has development skills. + There are limitations to using sealed packages. For example, when sinking a repository into NebulaGraph, the property name of the tag created in the sunk graph space must match the preset name in the code. The first method is recommended if the user has development skills. 1. Set the [Configuration file](https://github.com/vesoft-inc/nebula-algorithm/blob/{{algorithm.branch}}/nebula-algorithm/src/main/resources/application.conf). @@ -158,38 +158,38 @@ The `lib` repository provides 10 common graph algorithms. hasWeight: false } - # Configurations related to Nebula Graph + # Configurations related to NebulaGraph nebula: { - # Data source. When Nebula Graph is the data source of the graph computing, the configuration of `nebula.read` is valid. + # Data source. When NebulaGraph is the data source of the graph computing, the configuration of `nebula.read` is valid. read: { # The IP addresses and ports of all Meta services. Multiple addresses are separated by commas (,). Example: "ip1:port1,ip2:port2". - # To deploy Nebula Graph by using Docker Compose, fill in the port with which Docker Compose maps to the outside. + # To deploy NebulaGraph by using Docker Compose, fill in the port with which Docker Compose maps to the outside. # Check the status with `docker-compose ps`. metaAddress: "192.168.*.10:9559" - # The name of the graph space in Nebula Graph. + # The name of the graph space in NebulaGraph. space: basketballplayer - # Edge types in Nebula Graph. When there are multiple labels, the data of multiple edges will be merged. + # Edge types in NebulaGraph. When there are multiple labels, the data of multiple edges will be merged. labels: ["serve"] - # The property name of each edge type in Nebula Graph. This property will be used as the weight column of the algorithm. Make sure that it corresponds to the edge type. + # The property name of each edge type in NebulaGraph.
This property will be used as the weight column of the algorithm. Make sure that it corresponds to the edge type. weightCols: ["start_year"] } - # Data sink. When the graph computing result sinks into Nebula Graph, the configuration of `nebula.write` is valid. + # Data sink. When the graph computing result sinks into NebulaGraph, the configuration of `nebula.write` is valid. write:{ # The IP addresses and ports of all Graph services. Multiple addresses are separated by commas (,). Example: "ip1:port1,ip2:port2". # To deploy by using Docker Compose, fill in the port with which Docker Compose maps to the outside. # Check the status with `docker-compose ps`. graphAddress: "192.168.*.11:9669" # The IP addresses and ports of all Meta services. Multiple addresses are separated by commas (,). Example: "ip1:port1,ip2:port2". - # To deploy Nebula Graph by using Docker Compose, fill in the port with which Docker Compose maps to the outside. + # To deploy NebulaGraph by using Docker Compose, fill in the port with which Docker Compose maps to the outside. # Check the status with `docker-compose ps`. metaAddress: "192.168.*.12:9559" user:root pswd:nebula # Before submitting the graph computing task, create the graph space and tag. - # The name of the graph space in Nebula Graph. + # The name of the graph space in NebulaGraph. space:nb - # The name of the tag in Nebula Graph. The graph computing result will be written into this tag. The property name of this tag is as follows. + # The name of the tag in NebulaGraph. The graph computing result will be written into this tag. The property name of this tag is as follows.
# PageRank: pagerank # Louvain: louvain # ConnectedComponent: cc diff --git a/docs-2.0/graph-computing/nebula-analytics.md b/docs-2.0/graph-computing/nebula-analytics.md index 7b0f3f28c3a..6166e2f4abb 100644 --- a/docs-2.0/graph-computing/nebula-analytics.md +++ b/docs-2.0/graph-computing/nebula-analytics.md @@ -1,25 +1,25 @@ # Nebula Analytics -Nebula Analytics is a high-performance graph computing framework tool that performs graph analysis of data in the Nebula Graph database. +Nebula Analytics is a high-performance graph computing framework tool that performs graph analysis of data in the NebulaGraph database. !!! enterpriseonly - Only available for the Nebula Graph Enterprise Edition. + Only available for the NebulaGraph Enterprise Edition. ## Scenarios -You can import data from data sources as Nebula Graph clusters, CSV files on HDFS, or local CSV files into Nebula Analytics and export the graph computation results to Nebula Graph clusters, CSV files on HDFS, or local CSV files from Nebula Analytics. +You can import data from data sources such as NebulaGraph clusters, CSV files on HDFS, or local CSV files into Nebula Analytics and export the graph computation results from Nebula Analytics to NebulaGraph clusters, CSV files on HDFS, or local CSV files. ## Limitations -When you import Nebula Graph cluster data into Nebula Analytics and export the graph computation results from Nebula Analytics to a Nebula Graph cluster, the graph computation results can only be exported to the graph space where the data source is located. +When you import NebulaGraph cluster data into Nebula Analytics and export the graph computation results from Nebula Analytics to a NebulaGraph cluster, the graph computation results can only be exported to the graph space where the data source is located. ## Version compatibility -The version correspondence between Nebula Analytics and Nebula Graph is as follows. +The version correspondence between Nebula Analytics and NebulaGraph is as follows.
-|Nebula Analytics|Nebula Graph| +|Nebula Analytics|NebulaGraph| |:---|:---| |{{plato.release}}|{{nebula.release}}| |1.0.x|3.0.x| @@ -59,7 +59,7 @@ sudo rpm -i nebula-analytics-{{plato.release}}-centos.x86_64.rpm --prefix /home + - Download the binary file from the [GitHub releases page](https://github.com/vesoft-inc/nebula-console/releases "the nebula-console Releases page"). @@ -14,9 +14,9 @@ You can obtain Nebula Console in the following ways: ## Nebula Console functions -### Connect to Nebula Graph +### Connect to NebulaGraph -To connect to Nebula Graph with the `nebula-console` file, use the following syntax: +To connect to NebulaGraph with the `nebula-console` file, use the following syntax: ```bash -addr <ip> -port <port> -u <username> -p <password> @@ -29,14 +29,14 @@ Parameter descriptions are as follows: | Parameter | Description | | - | - | | `-h/-help` | Shows the help menu. | -| `-addr/-address` | Sets the IP address of the Graph service. The default address is 127.0.0.1. | +| `-addr/-address` | Sets the IP address of the Graph service. The default address is 127.0.0.1. | | `-P/-port` | Sets the port number of the graphd service. The default port number is 9669. | -| `-u/-user` | Sets the username of your Nebula Graph account. Before enabling authentication, you can use any existing username. The default username is `root`. | -| `-p/-password` | Sets the password of your Nebula Graph account. Before enabling authentication, you can use any characters as the password. | +| `-u/-user` | Sets the username of your NebulaGraph account. Before enabling authentication, you can use any existing username. The default username is `root`. | +| `-p/-password` | Sets the password of your NebulaGraph account. Before enabling authentication, you can use any characters as the password. | | `-t/-timeout` | Sets an integer-type timeout threshold of the connection. The unit is second. The default value is 120. | | `-e/-eval` | Sets a string-type nGQL statement.
The nGQL statement is executed once the connection succeeds. The connection stops after the result is returned. | | `-f/-file` | Sets the path of an nGQL file. The nGQL statements in the file are executed once the connection succeeds. The result will be returned and the connection stops then. | -| `-enable_ssl` | Enables SSL encryption when connecting to Nebula Graph. | +| `-enable_ssl` | Enables SSL encryption when connecting to NebulaGraph. | | `-ssl_root_ca_path` | Sets the storage path of the certification authority file. | | `-ssl_cert_path` | Sets the storage path of the certificate file. | | `-ssl_private_key_path` | Sets the storage path of the private key file. | @@ -200,9 +200,9 @@ This command will make Nebula Console sleep for N seconds. The schema is altered nebula> :sleep N ``` -### Disconnect Nebula Console from Nebula Graph +### Disconnect Nebula Console from NebulaGraph -You can use `:EXIT` or `:QUIT` to disconnect from Nebula Graph. For convenience, Nebula Console supports using these commands in lower case without the colon (":"), such as `quit`. +You can use `:EXIT` or `:QUIT` to disconnect from NebulaGraph. For convenience, Nebula Console supports using these commands in lower case without the colon (":"), such as `quit`. The example is as follows: diff --git a/docs-2.0/nebula-dashboard-ent/1.what-is-dashboard-ent.md b/docs-2.0/nebula-dashboard-ent/1.what-is-dashboard-ent.md index a504de36b3e..595d841f715 100644 --- a/docs-2.0/nebula-dashboard-ent/1.what-is-dashboard-ent.md +++ b/docs-2.0/nebula-dashboard-ent/1.what-is-dashboard-ent.md @@ -1,6 +1,6 @@ # What is Nebula Dashboard Enterprise Edition -Nebula Dashboard Enterprise Edition (Dashboard for short) is a visualization tool that monitors and manages the status of machines and services in Nebula Graph clusters. This topic introduces Dashboard Enterprise Edition. For more information, see [What is Nebula Dashboard Community Edition](../nebula-dashboard/1.what-is-dashboard.md). 
+Nebula Dashboard Enterprise Edition (Dashboard for short) is a visualization tool that monitors and manages the status of machines and services in NebulaGraph clusters. This topic introduces Dashboard Enterprise Edition. For more information, see [What is Nebula Dashboard Community Edition](../nebula-dashboard/1.what-is-dashboard.md). !!! Note @@ -10,7 +10,7 @@ Nebula Dashboard Enterprise Edition (Dashboard for short) is a visualization too ## Features -- Create a Nebula Graph cluster of a specified version, import nodes in batches, scale out Nebula Graph services with one click +- Create a NebulaGraph cluster of a specified version, import nodes in batches, scale out NebulaGraph services with one click - Import clusters, balance data, scale out or in on the visualization interface. @@ -44,7 +44,7 @@ Nebula Dashboard Enterprise Edition (Dashboard for short) is a visualization too - The monitoring data will be retained for 14 days by default, that is, only the monitoring data within the last 14 days can be queried. -The version of Nebula Graph must be 2.5.0 or later. +The version of NebulaGraph must be 2.5.0 or later. - It is recommended to use the latest version of Chrome to access Dashboard. @@ -56,9 +56,9 @@ Nebula Dashboard Enterprise Edition (Dashboard for short) is a visualization too ## Version compatibility -The version correspondence between Nebula Graph and Dashboard Enterprise Edition is as follows. +The version correspondence between NebulaGraph and Dashboard Enterprise Edition is as follows.
-|Nebula Graph version|Dashboard version| +|NebulaGraph version|Dashboard version| |:---|:---| |2.5.0 ~ 3.1.0|3.1.0| |2.5.x ~ 3.1.0|3.0.4| diff --git a/docs-2.0/nebula-dashboard-ent/11.manage-package.md b/docs-2.0/nebula-dashboard-ent/11.manage-package.md index da13649afa6..6d11d837cbb 100644 --- a/docs-2.0/nebula-dashboard-ent/11.manage-package.md +++ b/docs-2.0/nebula-dashboard-ent/11.manage-package.md @@ -1,6 +1,6 @@ # Package management -Nebula Dashboard Enterprise Edition supports managing Nebula Graph installation packages, such as downloading the community edition installation packages or manually uploading the installation packages. +Nebula Dashboard Enterprise Edition supports managing NebulaGraph installation packages, such as downloading the community edition installation packages or manually uploading the installation packages. ## Precautions diff --git a/docs-2.0/nebula-dashboard-ent/2.deploy-connect-dashboard-ent.md b/docs-2.0/nebula-dashboard-ent/2.deploy-connect-dashboard-ent.md index f0f282fe382..2a50c410a39 100644 --- a/docs-2.0/nebula-dashboard-ent/2.deploy-connect-dashboard-ent.md +++ b/docs-2.0/nebula-dashboard-ent/2.deploy-connect-dashboard-ent.md @@ -6,7 +6,7 @@ This topic will introduce how to deploy Dashboard Enterprise Edition in detail. Before deploying Dashboard Enterprise Edition, you must do a check of these: -- Select and download Dashboard Enterprise Edition of the correct version. For information about the version correspondence between Dashboard Enterprise Edition and Nebula Graph, see [Version compatibility](1.what-is-dashboard-ent.md). +- Select and download Dashboard Enterprise Edition of the correct version. For information about the version correspondence between Dashboard Enterprise Edition and NebulaGraph, see [Version compatibility](1.what-is-dashboard-ent.md). - MySQL and SQLite are supported to store Dashboard metadata. 
To use MySQL, make sure that the environment of [MySQL](https://www.mysql.com/) is ready and a MySQL database named `dashboard` is created. Make sure the default character set of the database is `utf8`. diff --git a/docs-2.0/nebula-dashboard-ent/3.create-import-dashboard/1.create-cluster.md b/docs-2.0/nebula-dashboard-ent/3.create-import-dashboard/1.create-cluster.md index f7dc20ca255..f9fe873b8ee 100644 --- a/docs-2.0/nebula-dashboard-ent/3.create-import-dashboard/1.create-cluster.md +++ b/docs-2.0/nebula-dashboard-ent/3.create-import-dashboard/1.create-cluster.md @@ -11,17 +11,17 @@ You can create a cluster following these steps: 3. On the **Create cluster** page, fill in the following: - Enter a **Cluster Name**, up to 15 characters for each name. In this example, the cluster name is `test`. - - Choose a Nebula Graph version to install. In this example, the version is `Enterprise v3.1.0`. + - Choose a NebulaGraph version to install. In this example, the version is `Enterprise v3.1.0`. !!! note - Only one Enterprise Edition of Nebula Graph is provided for you to choose from on the **Create cluster** page. To install other versions of Nebula Graph, you can download or upload the corresponding installer package on the **Package Management** page. For details, see [Package management](../11.manage-package.md). + Only one Enterprise Edition of NebulaGraph is provided for you to choose from on the **Create cluster** page. To install other versions of NebulaGraph, you can download or upload the corresponding installer package on the **Package Management** page. For details, see [Package management](../11.manage-package.md). - Click **Upload License**. !!! note - For the creation of a Community version of Nebula Graph, skip this step to upload the License file. + For the creation of a Community version of NebulaGraph, skip this step of uploading the License file. - **Add nodes**. The information of each node is required.
@@ -29,13 +29,13 @@ You can create a cluster following these steps: 1. Enter the IP information of each host. In this example, it is `192.168.8.129`. 2. Enter the SSH information. In this example, the SSH port is `22`, the SSH user is `vesoft`, and the SSH password is `nebula`. - 3. Choose the target Nebula Graph package. In this example, the package is `nebula-graph-ent-3.1.0-ent.el7.x86_64.rpm`. + 3. Choose the target NebulaGraph package. In this example, the package is `nebula-graph-ent-3.1.0-ent.el7.x86_64.rpm`. 4. Customize the cluster installation path. In this example, the default path is `.nebula/cluster`. 5. (Optional) Enter the node name to make a note on the node. In this example, the note is `Node_1`. - **Import nodes in batches**. The information of each node is required. To import nodes in batches, you need to choose the installation package and click **download the CSV template**. Fill in the template and upload it. Ensure that the node is correct, otherwise, upload failure may happen. -4. Select the node and add the service you need in the upper right corner. To create a cluster, you need to add 3 types of services to the node. If not familiar with the Nebula Graph architecture, click **Auto add service**. +4. Select the node and add the service you need in the upper right corner. To create a cluster, you need to add 3 types of services to the node. If not familiar with the NebulaGraph architecture, click **Auto add service**. 
![add-service](https://docs-cdn.nebula-graph.com.cn/figures/add-service-2022-04-08_en.png) diff --git a/docs-2.0/nebula-dashboard-ent/3.create-import-dashboard/2.import-cluster.md b/docs-2.0/nebula-dashboard-ent/3.create-import-dashboard/2.import-cluster.md index 878632f012e..7fa7f23a3d7 100644 --- a/docs-2.0/nebula-dashboard-ent/3.create-import-dashboard/2.import-cluster.md +++ b/docs-2.0/nebula-dashboard-ent/3.create-import-dashboard/2.import-cluster.md @@ -6,38 +6,38 @@ This topic introduces how to import clusters using Dashboard. The current versio !!! caution - In the same cluster, the service versions need to be unified. Importing Nebula Graph examples from different versions in the same cluster is not supported. + In the same cluster, the service versions need to be unified. Importing NebulaGraph instances of different versions in the same cluster is not supported. -1. In the configuration files of each service, change the IP in `_server_addrs` and `local_ip` to the server's IP, and then start Nebula Graph. +1. In the configuration files of each service, change the IP in `_server_addrs` and `local_ip` to the server's IP, and then start NebulaGraph. - For details, see [Configurations](../../5.configurations-and-logs/1.configurations/1.configurations.md) and [Manage Nebula Graph services](../../4.deployment-and-installation/manage-service.md). + For details, see [Configurations](../../5.configurations-and-logs/1.configurations/1.configurations.md) and [Manage NebulaGraph services](../../4.deployment-and-installation/manage-service.md). 2. On the **Cluster management** page, click **Import cluster**. -3. On the **Import cluster** page, enter the information of **Connect to Nebula Graph**. +3. On the **Import cluster** page, enter the information of **Connect to NebulaGraph**. - Graphd Host: `<ip>:<port>`. In this example, the IP is `192.168.8.157:9669`. - - Username: The account to connect to Nebula Graph. In this example, the username is `vesoft`.
- - Password: The password to connect to Nebula Graph. In this example, the password is `nebula`. + - Username: The account to connect to NebulaGraph. In this example, the username is `vesoft`. + - Password: The password to connect to NebulaGraph. In this example, the password is `nebula`. !!! note - By default, authentication is disabled in Nebula Graph. Therefore, you can use `root` as the username and any password to connect to Nebula Graph. - When authentication is enabled in Nebula Graph, you need to use the specified username and password to connect to Nebula Graph. For details of authentication, see [Nebula Graph manual](../../7.data-security/1.authentication/1.authentication.md "Click to go to Nebula Graph website"). + By default, authentication is disabled in NebulaGraph. Therefore, you can use `root` as the username and any password to connect to NebulaGraph. + When authentication is enabled in NebulaGraph, you need to use the specified username and password to connect to NebulaGraph. For details of authentication, see [NebulaGraph manual](../../7.data-security/1.authentication/1.authentication.md "Click to go to NebulaGraph website"). -4. On the Nebula Graph connection panel, fill in the following: +4. On the NebulaGraph connection panel, fill in the following: - Enter the cluster name, 15 characters at most. In this example, the cluster name is `create_1027`, and choose whether to use `sudo` to connect to the cluster. !!! notice - If your SSH account does not have permission for the Nebula Graph cluster, you can use `sudo` to connect to it. + If your SSH account does not have permission for the NebulaGraph cluster, you can use `sudo` to connect to it. - **Authorize** the node. The SSH username and password of each node are required, and choose to run `sudo` or not. !!! notice - If your SSH account has no permission to operate Nebula Graph, but can execute `sudo` commands without password, set **use sudo** to **yes**. 
+ If your SSH account has no permission to operate NebulaGraph, but can execute `sudo` commands without password, set **use sudo** to **yes**. - **Batch authorization** requires uploading the CSV file. Edit the authentication information of each node according to the downloaded CSV file. Ensure that the node information is correct, otherwise upload failure may happen. diff --git a/docs-2.0/nebula-dashboard-ent/4.cluster-operator/1.overview.md b/docs-2.0/nebula-dashboard-ent/4.cluster-operator/1.overview.md index 43cea04ac40..b0cdb997b6b 100644 --- a/docs-2.0/nebula-dashboard-ent/4.cluster-operator/1.overview.md +++ b/docs-2.0/nebula-dashboard-ent/4.cluster-operator/1.overview.md @@ -38,7 +38,7 @@ In this part, you can view the information of **Cluster Name**, **Creation Time* The parameter **Expiration Time** is displayed only if the created or imported cluster is an Enterprise Edition cluster. - **Creator**:The Dashboard account that is used to create the cluster. -- **Version**:The version of Nebula Graph installed in the cluster. The **Version Upgrade** button is displayed on the right to go to the page of [version upgrade](4.manage.md). +- **Version**:The version of NebulaGraph installed in the cluster. The **Version Upgrade** button is displayed on the right to go to the page of [version upgrade](4.manage.md). In the upper right of the **Information** section, click ![watch](https://docs-cdn.nebula-graph.com.cn/figures/watch.png) to view the cluster details, including name, creation time, account name, version, and the role of the account name. 
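The **Connect to NebulaGraph** step in the import flow above takes a Graphd address in `<host>:<port>` form (for example, `192.168.8.157:9669`). A minimal validation of that field might look like the sketch below; `parse_graphd_addr` is an invented helper for illustration, not part of Dashboard.

```python
def parse_graphd_addr(addr):
    # Illustrative only: split a "host:port" Graphd address and
    # sanity-check that the port is a number in the valid TCP range.
    host, sep, port = addr.rpartition(":")
    if not sep or not port.isdigit() or not 0 < int(port) < 65536:
        raise ValueError("expected <host>:<port>, got %r" % addr)
    return host, int(port)
```

With the example values from this topic, `parse_graphd_addr("192.168.8.157:9669")` yields the host and the default Graphd port `9669`.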
diff --git a/docs-2.0/nebula-dashboard-ent/4.cluster-operator/3.cluster-information.md b/docs-2.0/nebula-dashboard-ent/4.cluster-operator/3.cluster-information.md index 658c9d6bbcf..9001127c74c 100644 --- a/docs-2.0/nebula-dashboard-ent/4.cluster-operator/3.cluster-information.md +++ b/docs-2.0/nebula-dashboard-ent/4.cluster-operator/3.cluster-information.md @@ -1,6 +1,6 @@ # Cluster information -This topic introduces the cluster information of Dashboard from two parts **Overview Info** and **Cluster Diagnostics**. The **Overview Info** section displays the overview information of the Nebula Graph cluster. The **Cluster Diagnostics** section displays the cluster Diagnostics information of the Nebula Graph cluster. +This topic introduces the cluster information of Dashboard from two parts **Overview Info** and **Cluster Diagnostics**. The **Overview Info** section displays the overview information of the NebulaGraph cluster. The **Cluster Diagnostics** section displays the cluster Diagnostics information of the NebulaGraph cluster. ## Entry @@ -12,28 +12,28 @@ This topic introduces the cluster information of Dashboard from two parts **Over !!! note - Before viewing the cluster information, you need to select any online Graph service address, enter the account to log in to Nebula Graph (not the Dashboard login account), and the corresponding password. + Before viewing the cluster information, you need to select any online Graph service address, enter the account to log in to NebulaGraph (not the Dashboard login account), and the corresponding password. !!! caution - You need to ensure that Nebula Graph services have been deployed and started. For more information, see [Nebula Graph installation and deployment](../../4.deployment-and-installation/1.resource-preparations.md "Click to go to Nebula Graph installation and deployment"). + You need to ensure that NebulaGraph services have been deployed and started. 
For more information, see [NebulaGraph installation and deployment](../../4.deployment-and-installation/1.resource-preparations.md "Click to go to NebulaGraph installation and deployment"). ![coreinfo](https://docs-cdn.nebula-graph.com.cn/figures/clustercore-info_2022-04-11_en.png) -On the **Overview Info** page, you can see the information of the Nebula Graph cluster, including Storage leader distribution, Storage service details, versions and hosts information of each Nebula Graph service, and partition distribution and details. +On the **Overview Info** page, you can see the information of the NebulaGraph cluster, including Storage leader distribution, Storage service details, versions and hosts information of each NebulaGraph service, and partition distribution and details. ## Storage Leader Distribution In this section, the number of Leaders and the Leader distribution will be shown. -- Click the **Balance Leader** button in the upper right corner to distribute Leaders evenly and quickly in the Nebula Graph cluster. For details about the Leader, see [Storage Service](../../1.introduction/3.nebula-graph-architecture/4.storage-service.md). +- Click the **Balance Leader** button in the upper right corner to distribute Leaders evenly and quickly in the NebulaGraph cluster. For details about the Leader, see [Storage Service](../../1.introduction/3.nebula-graph-architecture/4.storage-service.md). - Click **Detail** in the upper right corner to view the details of the Leader distribution. ### Version -In this section, the version and host information of each Nebula Graph service will be shown. Click **Detail** in the upper right corner to view the details of the version and host information. +In this section, the version and host information of each NebulaGraph service will be shown. Click **Detail** in the upper right corner to view the details of the version and host information. 
## Service information diff --git a/docs-2.0/nebula-dashboard-ent/4.cluster-operator/4.manage.md b/docs-2.0/nebula-dashboard-ent/4.cluster-operator/4.manage.md index a68d143555f..b2bab4afb41 100644 --- a/docs-2.0/nebula-dashboard-ent/4.cluster-operator/4.manage.md +++ b/docs-2.0/nebula-dashboard-ent/4.cluster-operator/4.manage.md @@ -7,7 +7,7 @@ This topic introduces the cluster operation of Dashboard, including cluster node On this page, the information of all nodes will be shown, including the cluster name, Host(SSH_User), CPU (Core), etc. -- To add a node quickly, click **Add Node** and enter the following information, the Host, SSH port, SSH user, SSH password, and select a Nebula Graph package. +- To add a node quickly, click **Add Node** and enter the following information, the Host, SSH port, SSH user, SSH password, and select a NebulaGraph package. - Click the ![plus](https://docs-cdn.nebula-graph.com.cn/figures/Plus.png) button to view the process name, service type, status, runtime directory of the corresponding node. @@ -88,7 +88,7 @@ You can follow the below steps to add cluster members. ## Version Upgrade -Nebula Dashboard Enterprise Edition supports upgrading the version of the existing Nebula Graph cluster. +Nebula Dashboard Enterprise Edition supports upgrading the version of the existing NebulaGraph cluster. !!! caution @@ -98,7 +98,7 @@ Nebula Dashboard Enterprise Edition supports upgrading the version of the existi !!! note - - Only supports upgrading the Nebula Graph cluster that version greater than **3.0.0**. + - Only supports upgrading the NebulaGraph cluster that version greater than **3.0.0**. - Do not supports upgrading cluster across major version. - The community edition can be upgraded to the enterprise edition by uploading and verifying licenses, and the enterprise edition can be upgraded to the community edition. 
- The cluster can be upgraded to a minor version in the current major version, including a smaller version than the current minor version. @@ -106,7 +106,7 @@ Nebula Dashboard Enterprise Edition supports upgrading the version of the existi 1. At the top navigation bar of the Dashboard Enterprise Edition page, click **Cluster Management**. 2. On the right side of the target cluster, click **Detail**. 3. On the left-side navigation bar of the page, click **Operation**->**Version Upgrade**. -4. On the **Version Upgrade** page, confirm **Current Nebula Graph version**, select the upgrade version and then click **Next**. +4. On the **Version Upgrade** page, confirm **Current NebulaGraph version**, select the upgrade version and then click **Next**. !!! note @@ -115,5 +115,5 @@ Nebula Dashboard Enterprise Edition supports upgrading the version of the existi 5. Perform the upgrade check, and then click **Next**. The cluster will be shut down during the upgrade and automatically restart the services after the upgrade. You can use the **diagnostics report** to help you judge whether the timing to upgrade is suitable. -6. Confirm the upgrade information again, including **Cluster Name**, **Current Nebula Graph Version** and **Upgrade Nebula Graph Version**, then click **Upgrade**. +6. Confirm the upgrade information again, including **Cluster Name**, **Current NebulaGraph Version** and **Upgrade NebulaGraph Version**, then click **Upgrade**. Users can view the upgrade task information in [task center](../10.tasks.md), the task type is `version update`. 
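The upgrade constraints listed above (source version greater than **3.0.0**, no upgrades across a major version, any minor version within the current major, including a smaller minor) can be summarized as a small predicate. This is a sketch of the documented rules, not Dashboard's actual check; it treats `3.0.0` itself as eligible, so tighten the comparison if "greater than" is meant strictly.

```python
def upgrade_allowed(current, target):
    # Sketch of the documented Dashboard constraints (not the real check):
    # clusters older than 3.0.0 cannot be upgraded, and the target must
    # stay within the current major version (a smaller minor is accepted).
    cur = tuple(int(x) for x in current.split("."))
    tgt = tuple(int(x) for x in target.split("."))
    if cur < (3, 0, 0):
        return False
    return cur[0] == tgt[0]
```

For example, `3.1.0 -> 3.2.0` and `3.2.0 -> 3.1.0` pass, while `3.1.0 -> 4.0.0` is rejected as a cross-major upgrade.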
diff --git a/docs-2.0/nebula-dashboard-ent/4.cluster-operator/7.cluster-diagnosis.md b/docs-2.0/nebula-dashboard-ent/4.cluster-operator/7.cluster-diagnosis.md index 63f8ca598ab..0a82de04edf 100644 --- a/docs-2.0/nebula-dashboard-ent/4.cluster-operator/7.cluster-diagnosis.md +++ b/docs-2.0/nebula-dashboard-ent/4.cluster-operator/7.cluster-diagnosis.md @@ -70,13 +70,13 @@ A diagnostic report contains the following information: | Parameter | Description | | ---------------------------------- | ---- | | `HOST` | The IP address of the node. | - | `INSTANCE` | The number of Nebula Graph services deployed on this node. Such as: `metad*1 graphd*1 storaged*1`. | + | `INSTANCE` | The number of NebulaGraph services deployed on this node. Such as: `metad*1 graphd*1 storaged*1`. | | `CPU` | The number of CPU cores. Unit: Core. | | `MEMORY` | The memory size of the node. Unit: GB. | | `DISK` | The disk size of the node. Unit: GB. | -- **Service Info**: Displays the type, node IP, HTTP port, and operational status of each Nebula Graph service. +- **Service Info**: Displays the type, node IP, HTTP port, and operational status of each NebulaGraph service. - **Leader Distribution**: Displays the distribution of Leaders in Storage services. @@ -166,4 +166,4 @@ The descriptions of other parameters are as follows: Lists all configuration information for Graph, Meta, and Storage services in the current cluster. -For information about the configurations of each service in Nebula Graph, see [Configurations](../../5.configurations-and-logs/1.configurations/1.configurations.md). +For information about the configurations of each service in NebulaGraph, see [Configurations](../../5.configurations-and-logs/1.configurations/1.configurations.md). 
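The `INSTANCE` field in the diagnostic report above packs the per-node service counts into one string, such as `metad*1 graphd*1 storaged*1`. A sketch of how such a field can be split back into per-service counts (the helper name is invented for illustration):

```python
def parse_instance(field):
    # "metad*1 graphd*1 storaged*1" -> {"metad": 1, "graphd": 1, "storaged": 1}
    counts = {}
    for item in field.split():
        name, _, count = item.partition("*")
        counts[name] = int(count)
    return counts
```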
diff --git a/docs-2.0/nebula-dashboard-ent/4.cluster-operator/8.backup-and-restore.md b/docs-2.0/nebula-dashboard-ent/4.cluster-operator/8.backup-and-restore.md index ca09bf682e3..9556e2d5a10 100644 --- a/docs-2.0/nebula-dashboard-ent/4.cluster-operator/8.backup-and-restore.md +++ b/docs-2.0/nebula-dashboard-ent/4.cluster-operator/8.backup-and-restore.md @@ -1,6 +1,6 @@ -# Back up and restore Nebula Graph data +# Back up and restore NebulaGraph data -To prevent data loss due to operational errors or system failures, Nebula Graph offers the Backup & Restore (BR) tool to help users back up and restore graph data. Dashboard Enterprise Edition integrates BR capabilities and offers simple UIs that allow users to perform data backup and restore operations in just a few steps. This document describes how to use Dashboard Enterprise Edition to backup and restore Nebula Graph data. +To prevent data loss due to operational errors or system failures, NebulaGraph offers the Backup & Restore (BR) tool to help users back up and restore graph data. Dashboard Enterprise Edition integrates BR capabilities and offers simple UIs that allow users to perform data backup and restore operations in just a few steps. This document describes how to use Dashboard Enterprise Edition to backup and restore NebulaGraph data. ## Limits @@ -64,7 +64,7 @@ Data is backed up to the cloud storage service by creating a backup file as foll Environment check includes: - - Your Nebula Graph cluster is running. + - Your NebulaGraph cluster is running. - The access key to log onto the storage service has not expired. - The status of business traffic. It only checks if the QPS of your business is 0. When QPS is not 0, you are prompted to back up data during off-peak hours. @@ -116,7 +116,7 @@ Follow the steps below to restore data. Environment check includes: - - Your Nebula Graph cluster is running. + - Your NebulaGraph cluster is running. - The access key to log onto the storage service has not expired. 
- No business website traffic. diff --git a/docs-2.0/nebula-dashboard-ent/6.global-config.md b/docs-2.0/nebula-dashboard-ent/6.global-config.md index b5b41971459..1de2b498cc6 100644 --- a/docs-2.0/nebula-dashboard-ent/6.global-config.md +++ b/docs-2.0/nebula-dashboard-ent/6.global-config.md @@ -48,7 +48,7 @@ On the left-side navigation bar of the **Interface Settings** page, click **Othe ## Help center -At the top navigation bar of the Dashboard Enterprise Edition, click **Help**. On the Help page, you can jump to Dashboard Docs, Nebula Graph Docs, Nebula Graph Website, or Nebula Graph Forum. +At the top navigation bar of the Dashboard Enterprise Edition, click **Help**. On the Help page, you can jump to Dashboard Docs, NebulaGraph Docs, NebulaGraph Website, or NebulaGraph Forum. ## User information diff --git a/docs-2.0/nebula-dashboard-ent/7.monitor-parameter.md b/docs-2.0/nebula-dashboard-ent/7.monitor-parameter.md index 4f6e7e4c058..df7b2457491 100644 --- a/docs-2.0/nebula-dashboard-ent/7.monitor-parameter.md +++ b/docs-2.0/nebula-dashboard-ent/7.monitor-parameter.md @@ -79,7 +79,7 @@ The period is the time range of counting metrics. It currently supports 5 second !!! note - Dashboard collects the following metrics from the Nebula Graph core, but only shows the metrics that are important to it. + Dashboard collects the following metrics from the NebulaGraph core, but only shows the metrics that are important to it. {% include "/source-monitoring-metrics.md" %} diff --git a/docs-2.0/nebula-dashboard-ent/8.faq.md b/docs-2.0/nebula-dashboard-ent/8.faq.md index 682a9b2814c..01697b8fe74 100644 --- a/docs-2.0/nebula-dashboard-ent/8.faq.md +++ b/docs-2.0/nebula-dashboard-ent/8.faq.md @@ -4,9 +4,9 @@ This topic lists the frequently asked questions for using Nebula Dashboard. You ## "What are Cluster, Node, and Service?" -- Cluster: refers to a group of systems composed of nodes where multiple Nebula Graph services are located. 
+- Cluster: refers to a group of systems composed of nodes where multiple NebulaGraph services are located. -- Node: refers to the physical or virtual machine hosting Nebula Graph services. +- Node: refers to the physical or virtual machine hosting NebulaGraph services. - Service: refers to Nebula services, including Metad, Storaged, and Graphd services. @@ -24,11 +24,11 @@ Managing clusters requires the SSH information of the corresponding node. Theref ## "What is scaling?" -Nebula Graph is a distributed graph database that supports dynamic scaling services at runtime. Therefore, you can dynamically scale Storaged and Graphd services through Dashboard. The Metad service cannot be scaled. +NebulaGraph is a distributed graph database that supports dynamic scaling services at runtime. Therefore, you can dynamically scale Storaged and Graphd services through Dashboard. The Metad service cannot be scaled. ## "Why cannot operate on the Metad service?" -The Metad service stores the metadata of the Nebula Graph database. Once the Metad service fails to function, the entire cluster may break down. Besides, the amount of data processed by the Metad service is not much, so it is not recommended to scale the Metad service. And we directly disabled operating on the Metad service in Dashboard to prevent the cluster from being unavailable due to the misoperation of users. +The Metad service stores the metadata of the NebulaGraph database. Once the Metad service fails to function, the entire cluster may break down. Besides, the amount of data processed by the Metad service is not much, so it is not recommended to scale the Metad service. And we directly disabled operating on the Metad service in Dashboard to prevent the cluster from being unavailable due to the misoperation of users. ## "What impact will the scaling have on the data?" @@ -45,19 +45,19 @@ The Metad service stores the metadata of the Nebula Graph database. Once the Met - Make sure that the license is not expired. 
-You can also execute `cat logs/webserver.log` in the Dashboard directory to view the startup information of each module. If the above conditions are met but Dashboard still cannot be started, go to [Nebula Graph Official Forum](https://discuss.nebula-graph.io/ "Click to go to Nebula Graph Official Forum") for consultation. +You can also execute `cat logs/webserver.log` in the Dashboard directory to view the startup information of each module. If the above conditions are met but Dashboard still cannot be started, go to [NebulaGraph Official Forum](https://discuss.nebula-graph.io/ "Click to go to NebulaGraph Official Forum") for consultation. -## "Can I add the Nebula Graph installation package manually?" +## "Can I add the NebulaGraph installation package manually?" -You can add the installation package manually in Dashboard. To download the system and RPM/DEB package you need, see [How to download Nebula Graph](https://nebula-graph.io/download/) and add the package to `nebula-dashboard-ent/download/nebula-graph`. And you can select the added package for deployment when creating and scaling out a cluster. +You can add the installation package manually in Dashboard. To download the system and RPM/DEB package you need, see [How to download NebulaGraph](https://nebula-graph.io/download/) and add the package to `nebula-dashboard-ent/download/nebula-graph`. And you can select the added package for deployment when creating and scaling out a cluster. +When importing a cluster, you need to access the path where the NebulaGraph services are installed. If the service account does not have access privileges, the cluster cannot be imported successfully. You can grant access to the service to the account (e.g. `sudo chown -R tom:tom nebula`) and restart the service with the account. --> ## Why does it prompt “SSH connection error” when importing a cluster? 
-If **Service Host** shows `127.0.0.1`, and your Dashboard and Nebula Graph are deployed on the same machine when authorizing service hosts, the system will prompt "SSH connection error”. You need to change the Host IP of each service to the real machine IP in the configuration files of all Nebula Graph services. For more information, see [Configuration management](../5.configurations-and-logs/1.configurations/1.configurations.md). +If **Service Host** shows `127.0.0.1`, and your Dashboard and NebulaGraph are deployed on the same machine when authorizing service hosts, the system will prompt "SSH connection error”. You need to change the Host IP of each service to the real machine IP in the configuration files of all NebulaGraph services. For more information, see [Configuration management](../5.configurations-and-logs/1.configurations/1.configurations.md). If you import a cluster deployed with Docker, it also prompts "SSH connection error". Dashboard does not support importing a cluster deployed with Docker. \ No newline at end of file diff --git a/docs-2.0/nebula-dashboard/1.what-is-dashboard.md b/docs-2.0/nebula-dashboard/1.what-is-dashboard.md index 186e43fdf6c..2e837e49c71 100644 --- a/docs-2.0/nebula-dashboard/1.what-is-dashboard.md +++ b/docs-2.0/nebula-dashboard/1.what-is-dashboard.md @@ -1,6 +1,6 @@ # What is Nebula Dashboard Community Edition -Nebula Dashboard Community Edition (Dashboard for short) is a visualization tool that monitors the status of machines and services in Nebula Graph clusters. This topic introduces Dashboard Community Edition. For details of Dashboard Enterprise Edition, refer to [What is Nebula Dashboard Enterprise Edition](../nebula-dashboard-ent/1.what-is-dashboard-ent.md). +Nebula Dashboard Community Edition (Dashboard for short) is a visualization tool that monitors the status of machines and services in NebulaGraph clusters. This topic introduces Dashboard Community Edition. 
For details of Dashboard Enterprise Edition, refer to [What is Nebula Dashboard Enterprise Edition](../nebula-dashboard-ent/1.what-is-dashboard-ent.md). !!! enterpriseonly @@ -40,9 +40,9 @@ You can use Dashboard in one of the following scenarios: ## Version compatibility -The version correspondence between Nebula Graph and Dashboard Community Edition is as follows. +The version correspondence between NebulaGraph and Dashboard Community Edition is as follows. -|Nebula Graph version|Dashboard version| +|NebulaGraph version|Dashboard version| |:---|:---| |2.5.0 ~ 3.1.0|3.1.0| |2.5.x ~ 3.1.0|1.1.1| diff --git a/docs-2.0/nebula-dashboard/2.deploy-dashboard.md b/docs-2.0/nebula-dashboard/2.deploy-dashboard.md index 7b029dc8053..c1a3f0cfdd9 100644 --- a/docs-2.0/nebula-dashboard/2.deploy-dashboard.md +++ b/docs-2.0/nebula-dashboard/2.deploy-dashboard.md @@ -6,7 +6,7 @@ The deployment of Dashboard involves five services. This topic will describe how Before you deploy Dashboard, you must confirm that: -- The Nebula Graph services are deployed and started. For more information, see [Nebula Graph Database Manual](../2.quick-start/1.quick-start-workflow.md). +- The NebulaGraph services are deployed and started. For more information, see [NebulaGraph Database Manual](../2.quick-start/1.quick-start-workflow.md). - Before the installation starts, the following ports are not occupied. @@ -26,7 +26,7 @@ Before you deploy Dashboard, you must confirm that: Download the tar package as needed, and it is recommended to select the latest version. 
-| Dashboard package | Nebula Graph version | +| Dashboard package | NebulaGraph version | | :----- | :----- | | [nebula-dashboard-{{ dashboard.release }}.x86_64.tar.gz](https://oss-cdn.nebula-graph.com.cn/nebula-graph-dashboard/{{ dashboard.release }}/nebula-dashboard-{{ dashboard.release }}.x86_64.tar.gz) | 2.5.x~3.1.0 | @@ -39,7 +39,7 @@ Run `tar -xvf nebula-dashboard-{{ dashboard.release }}.x86_64.tar.gz` to decompr |node-exporter | Collects the source information of machines in the cluster, including the CPU, memory, load, disk, and network. |9100| |nebula-stats-exporter | Collects the performance metrics in the cluster, including the IP addresses, versions, and monitoring metrics (such as the number of queries, the latency of queries, the latency of heartbeats, and so on). |9200| |prometheus | The time series database that stores monitoring data. |9090| -|nebula-http-gateway | Provides HTTP ports for cluster services to execute nGQL statements to interact with the Nebula Graph database. |8090| +|nebula-http-gateway | Provides HTTP ports for cluster services to execute nGQL statements to interact with the NebulaGraph database. |8090| The above four services should be deployed as follows. @@ -182,7 +182,7 @@ After the service is started, you can enter `:8090` in the browser to check target: "127.0.0.1:9090" // The IP address and port of the prometheus service. nebulaServer: ip: "192.168.8.143" // The IP address of any Graph service. - port: 9669 // The port of the Nebula Graph. + port: 9669 // The port of the NebulaGraph. ... ``` diff --git a/docs-2.0/nebula-dashboard/3.connect-dashboard.md b/docs-2.0/nebula-dashboard/3.connect-dashboard.md index eb3e395f37b..2c6e869d31c 100644 --- a/docs-2.0/nebula-dashboard/3.connect-dashboard.md +++ b/docs-2.0/nebula-dashboard/3.connect-dashboard.md @@ -12,11 +12,11 @@ After Dashboard is deployed, you can log in and use Dashboard on the browser. 1. 
Confirm the IP address of the machine where the `nebula-dashboard` service is installed. Enter `:7003` in the browser to open the login page. -2. Enter the username and the passwords of the Nebula Graph database. +2. Enter the username and the passwords of the NebulaGraph database. !!! note - Ensure that you have configured the IP of the machines where your Nebula Graph is deployed in the `config.json` file. For more information, see [Deploy Dashboard](2.deploy-dashboard.md). + Ensure that you have configured the IP of the machines where your NebulaGraph is deployed in the `config.json` file. For more information, see [Deploy Dashboard](2.deploy-dashboard.md). - If authentication is enabled, you can log in with the created accounts. @@ -24,7 +24,7 @@ After Dashboard is deployed, you can log in and use Dashboard on the browser. To enable authentication, see [Authentication](../7.data-security/1.authentication/1.authentication.md). -3. Select the Nebula Graph version to be used. +3. Select the NebulaGraph version to be used. !!! note diff --git a/docs-2.0/nebula-dashboard/6.monitor-parameter.md b/docs-2.0/nebula-dashboard/6.monitor-parameter.md index 725a87723e7..7a1998a370a 100644 --- a/docs-2.0/nebula-dashboard/6.monitor-parameter.md +++ b/docs-2.0/nebula-dashboard/6.monitor-parameter.md @@ -79,7 +79,7 @@ The period is the time range of counting metrics. It currently supports 5 second !!! note - Dashboard collects the following metrics from the Nebula Graph core, but only shows the metrics that are important to it. + Dashboard collects the following metrics from the NebulaGraph core, but only shows the metrics that are important to it. 
### Graph diff --git a/docs-2.0/nebula-exchange/about-exchange/ex-ug-limitations.md b/docs-2.0/nebula-exchange/about-exchange/ex-ug-limitations.md index 0d6f1eb870a..a6b52e21aa0 100644 --- a/docs-2.0/nebula-exchange/about-exchange/ex-ug-limitations.md +++ b/docs-2.0/nebula-exchange/about-exchange/ex-ug-limitations.md @@ -4,9 +4,9 @@ This topic describes some of the limitations of using Exchange 3.x. ## Version compatibility -The correspondence between the Nebula Exchange release (the JAR version) and the Nebula Graph core release is as follows. +The correspondence between the Nebula Exchange release (the JAR version) and the NebulaGraph core release is as follows. -|Exchange client|Nebula Graph| +|Exchange client|NebulaGraph| |:---|:---| |3.0-SNAPSHOT|nightly| |{{exchange.release}}|{{nebula.release}}| @@ -18,7 +18,7 @@ The correspondence between the Nebula Exchange release (the JAR version) and the JAR packages are available in two ways: [compile them yourself](../ex-ug-compile.md) or download them from the Maven repository. -If you are using Nebula Graph 1.x, use [Nebula Exchange 1.x](https://github.com/vesoft-inc/nebula-java/tree/v1.0/tools "Click to go to GitHub"). +If you are using NebulaGraph 1.x, use [Nebula Exchange 1.x](https://github.com/vesoft-inc/nebula-java/tree/v1.0/tools "Click to go to GitHub"). 
## Environment @@ -55,7 +55,7 @@ To ensure the healthy operation of Exchange, ensure that the following software | MaxCompute | N | Y | N | | Pulsar | N | Y | Untested | | Kafka | N | Y | Untested | - | Nebula Graph | N | Y | N | + | NebulaGraph | N | Y | N | Hadoop Distributed File System (HDFS) needs to be deployed in the following scenarios: diff --git a/docs-2.0/nebula-exchange/about-exchange/ex-ug-what-is-exchange.md b/docs-2.0/nebula-exchange/about-exchange/ex-ug-what-is-exchange.md index d816f312d8e..4bd863935b6 100644 --- a/docs-2.0/nebula-exchange/about-exchange/ex-ug-what-is-exchange.md +++ b/docs-2.0/nebula-exchange/about-exchange/ex-ug-what-is-exchange.md @@ -1,10 +1,10 @@ # What is Nebula Exchange -[Nebula Exchange](https://github.com/vesoft-inc/nebula-exchange) (Exchange) is an Apache Spark™ application for bulk migration of cluster data to Nebula Graph in a distributed environment, supporting batch and streaming data migration in a variety of formats. +[Nebula Exchange](https://github.com/vesoft-inc/nebula-exchange) (Exchange) is an Apache Spark™ application for bulk migration of cluster data to NebulaGraph in a distributed environment, supporting batch and streaming data migration in a variety of formats. -Exchange consists of Reader, Processor, and Writer. After Reader reads data from different sources and returns a DataFrame, the Processor iterates through each row of the DataFrame and obtains the corresponding value based on the mapping between `fields` in the configuration file. After iterating through the number of rows in the specified batch, Writer writes the captured data to the Nebula Graph at once. The following figure illustrates the process by which Exchange completes the data conversion and migration. +Exchange consists of Reader, Processor, and Writer. 
After Reader reads data from different sources and returns a DataFrame, the Processor iterates through each row of the DataFrame and obtains the corresponding value based on the mapping between `fields` in the configuration file. After iterating through the number of rows in the specified batch, Writer writes the captured data to the NebulaGraph at once. The following figure illustrates the process by which Exchange completes the data conversion and migration. -![Nebula Graph® Exchange consists of Reader, Processor, and Writer that can migrate data from a variety of formats and sources to Nebula Graph](https://docs-cdn.nebula-graph.com.cn/figures/ex-ug-003.png) +![NebulaGraph® Exchange consists of Reader, Processor, and Writer that can migrate data from a variety of formats and sources to NebulaGraph](https://docs-cdn.nebula-graph.com.cn/figures/ex-ug-003.png) ## Editions @@ -14,27 +14,27 @@ Exchange has two editions, the Community Edition and the Enterprise Edition. The Exchange applies to the following scenarios: -- Streaming data from Kafka and Pulsar platforms, such as log files, online shopping data, activities of game players, information on social websites, financial transactions or geospatial services, and telemetry data from connected devices or instruments in the data center, are required to be converted into the vertex or edge data of the property graph and import them into the Nebula Graph database. +- Streaming data from Kafka and Pulsar platforms, such as log files, online shopping data, activities of game players, information on social websites, financial transactions or geospatial services, and telemetry data from connected devices or instruments in the data center, are required to be converted into the vertex or edge data of the property graph and import them into the NebulaGraph database. 
-- Batch data, such as data from a time period, needs to be read from a relational database (such as MySQL) or a distributed file system (such as HDFS), converted into vertex or edge data for a property graph, and imported into the Nebula Graph database. +- Batch data, such as data from a time period, needs to be read from a relational database (such as MySQL) or a distributed file system (such as HDFS), converted into vertex or edge data for a property graph, and imported into the NebulaGraph database. -- A large volume of data needs to be generated into SST files that Nebula Graph can recognize and then imported into the Nebula Graph database. +- A large volume of data needs to be converted into SST files that NebulaGraph can recognize and then imported into the NebulaGraph database. -- The data saved in Nebula Graph needs to be exported. +- The data saved in NebulaGraph needs to be exported. !!! enterpriseonly - Exporting the data saved in Nebula Graph is supported by Exchange Enterprise Edition only. + Exporting the data saved in NebulaGraph is supported by Exchange Enterprise Edition only. ## Advantages Exchange has the following advantages: -- High adaptability: It supports importing data into the Nebula Graph database in a variety of formats or from a variety of sources, making it easy to migrate data. +- High adaptability: It supports importing data into the NebulaGraph database in a variety of formats or from a variety of sources, making it easy to migrate data. - SST import: It supports converting data from different sources into SST files for data import. -- SSL encryption: It supports establishing the SSL encryption between Exchange and Nebula Graph to ensure data security. +- SSL encryption: It supports establishing SSL encryption between Exchange and NebulaGraph to ensure data security. - Resumable data import: It supports resumable data import to save time and improve data import efficiency.
@@ -52,7 +52,7 @@ Exchange has the following advantages: ## Data source -Exchange {{exchange.release}} supports converting data from the following formats or sources into vertexes and edges that Nebula Graph can recognize, and then importing them into Nebula Graph in the form of nGQL statements: +Exchange {{exchange.release}} supports converting data from the following formats or sources into vertexes and edges that NebulaGraph can recognize, and then importing them into NebulaGraph in the form of nGQL statements: - Data stored in HDFS or locally: - [Apache Parquet](../use-exchange/ex-ug-import-from-parquet.md) @@ -81,7 +81,7 @@ Exchange {{exchange.release}} supports converting data from the following format In addition to importing data as nGQL statements, Exchange supports generating SST files for data sources and then [importing SST](../use-exchange/ex-ug-import-from-sst.md) files via Console. -In addition, Exchange Enterprise Edition also supports [exporting data to a CSV file](../use-exchange/ex-ug-export-from-nebula.md) using Nebula Graph as data sources. +In addition, Exchange Enterprise Edition also supports [exporting data to a CSV file](../use-exchange/ex-ug-export-from-nebula.md) using NebulaGraph as the data source. ## Release note diff --git a/docs-2.0/nebula-exchange/ex-ug-FAQ.md b/docs-2.0/nebula-exchange/ex-ug-FAQ.md index 521a1937249..e4a4264dda9 100644 --- a/docs-2.0/nebula-exchange/ex-ug-FAQ.md +++ b/docs-2.0/nebula-exchange/ex-ug-FAQ.md @@ -74,7 +74,7 @@ Check whether the `-h` parameter is omitted in the command for submitting the Ex ### Q: Run error: `com.facebook.thrift.protocol.TProtocolException: Expected protocol id xxx` -Check that the Nebula Graph service port is configured correctly. +Check that the NebulaGraph service port is configured correctly. - For source, RPM, or DEB installations, configure the port number corresponding to `--port` in the configuration file for each service.
@@ -105,9 +105,9 @@ Check that the Nebula Graph service port is configured correctly. ### Q: Error: `Exception in thread "main" com.facebook.thrift.protocol.TProtocolException: The field 'code' has been assigned the invalid value -4` -Check whether the version of Exchange is the same as that of Nebula Graph. For more information, see [Limitations](../nebula-exchange/about-exchange/ex-ug-limitations.md). +Check whether the version of Exchange is the same as that of NebulaGraph. For more information, see [Limitations](../nebula-exchange/about-exchange/ex-ug-limitations.md). -### Q: How to correct the messy code when importing Hive data into Nebula Graph? +### Q: How to fix garbled characters when importing Hive data into NebulaGraph? It may happen if the property value of the data in Hive contains Chinese characters. The solution is to add the following options before the JAR package path in the import command: @@ -152,11 +152,11 @@ Solution: ### Q: Which configuration fields will affect import performance? -- batch: The number of data contained in each nGQL statement sent to the Nebula Graph service. +- batch: The number of data records contained in each nGQL statement sent to the NebulaGraph service. - partition: The number of Spark data partitions, indicating the number of concurrent data imports. -- nebula.rate: Get a token from the token bucket before sending a request to Nebula Graph. +- nebula.rate: Get a token from the token bucket before sending a request to NebulaGraph. - limit: Represents the size of the token bucket. @@ -166,14 +166,14 @@ The values of these four parameters can be adjusted appropriately according to t ## Others -### Q: Which versions of Nebula Graph are supported by Exchange? +### Q: Which versions of NebulaGraph are supported by Exchange? See [Limitations](about-exchange/ex-ug-limitations.md). ### Q: What is the relationship between Exchange and Spark Writer? -Exchange is the Spark application developed based on Spark Writer.
Both are suitable for bulk migration of cluster data to Nebula Graph in a distributed environment, but later maintenance work will be focused on Exchange. Compared with Spark Writer, Exchange has the following improvements: +Exchange is a Spark application developed based on Spark Writer. Both are suitable for bulk migration of cluster data to NebulaGraph in a distributed environment, but later maintenance work will be focused on Exchange. Compared with Spark Writer, Exchange has the following improvements: - It supports more abundant data sources, such as MySQL, Neo4j, Hive, HBase, Kafka, Pulsar, etc. -- It fixed some problems of Spark Writer. For example, when Spark reads data from HDFS, the default source data is String, which may be different from the Nebula Graph's Schema. So Exchange adds automatic data type matching and type conversion. When the data type in the Nebula Graph's Schema is non-String (e.g. double), Exchange converts the source data of String type to the corresponding type. +- It fixes some problems of Spark Writer. For example, when Spark reads data from HDFS, the default source data is String, which may be different from the NebulaGraph Schema. So Exchange adds automatic data type matching and type conversion. When the data type in the NebulaGraph Schema is non-String (e.g. double), Exchange converts the source data of String type to the corresponding type. diff --git a/docs-2.0/nebula-exchange/ex-ug-compile.md b/docs-2.0/nebula-exchange/ex-ug-compile.md index dc3b8c7f0c8..e23b439ce5e 100644 --- a/docs-2.0/nebula-exchange/ex-ug-compile.md +++ b/docs-2.0/nebula-exchange/ex-ug-compile.md @@ -6,7 +6,7 @@ This topic introduces how to get the JAR file of Nebula Exchange. The JAR file of Exchange Community Edition can be [downloaded](https://github.com/vesoft-inc/nebula-exchange/releases) directly. -To download Exchange Enterprise Edition, [get Nebula Graph Enterprise Edition Package](https://nebula-graph.io/pricing/) first.
+To download Exchange Enterprise Edition, [get NebulaGraph Enterprise Edition Package](https://nebula-graph.io/pricing/) first. ## Get the JAR file by compiling the source code @@ -14,7 +14,7 @@ You can get the JAR file of Exchange Community Edition by compiling the source c !!! enterpriseonly - You can get Exchange Enterprise Edition in Nebula Graph Enterprise Edition Package only. + You can get Exchange Enterprise Edition in NebulaGraph Enterprise Edition Package only. ### Prerequisites diff --git a/docs-2.0/nebula-exchange/parameter-reference/ex-ug-para-import-command.md b/docs-2.0/nebula-exchange/parameter-reference/ex-ug-para-import-command.md index 9ac40252c93..284feb63a32 100644 --- a/docs-2.0/nebula-exchange/parameter-reference/ex-ug-para-import-command.md +++ b/docs-2.0/nebula-exchange/parameter-reference/ex-ug-para-import-command.md @@ -1,6 +1,6 @@ # Options for import -After editing the configuration file, run the following commands to import specified source data into the Nebula Graph database. +After editing the configuration file, run the following commands to import specified source data into the NebulaGraph database. 
- First import diff --git a/docs-2.0/nebula-exchange/parameter-reference/ex-ug-parameter.md b/docs-2.0/nebula-exchange/parameter-reference/ex-ug-parameter.md index d7da1748709..bf3406eab67 100644 --- a/docs-2.0/nebula-exchange/parameter-reference/ex-ug-parameter.md +++ b/docs-2.0/nebula-exchange/parameter-reference/ex-ug-parameter.md @@ -10,7 +10,7 @@ The `application.conf` file contains the following content types: - Hive configurations (optional) -- Nebula Graph configurations +- NebulaGraph configurations - Vertex configurations @@ -40,13 +40,13 @@ Users only need to configure parameters for connecting to Hive if Spark and Hive |`hive.connectionUserName`|list\[string\]|-|Yes|The username for connections.| |`hive.connectionPassword`|list\[string\]|-|Yes|The account password.| -## Nebula Graph configurations +## NebulaGraph configurations |Parameter|Type|Default value|Required|Description| |:---|:---|:---|:---|:---| |`nebula.address.graph`|list\[string\]|`["127.0.0.1:9669"]`|Yes|The addresses of all Graph services, including IPs and ports, separated by commas (,). Example: `["ip1:port1","ip2:port2","ip3:port3"]`.| |`nebula.address.meta`|list\[string\]|`["127.0.0.1:9559"]`|Yes|The addresses of all Meta services, including IPs and ports, separated by commas (,). Example: `["ip1:port1","ip2:port2","ip3:port3"]`.| -|`nebula.user`|string|-|Yes|The username with write permissions for Nebula Graph.| +|`nebula.user`|string|-|Yes|The username with write permissions for NebulaGraph.| |`nebula.pswd`|string|-|Yes|The account password.| |`nebula.space`|string|-|Yes|The name of the graph space where data needs to be imported.| |`nebula.ssl.enable.graph`|bool|`false`|Yes|Enables the [SSL encryption](https://en.wikipedia.org/wiki/Transport_Layer_Security) between Exchange and Graph services. If the value is `true`, the SSL encryption is enabled and the following SSL parameters take effect. 
If Exchange is run on a multi-machine cluster, you need to store the corresponding files in the same path on each machine when setting the following SSL-related paths.| @@ -76,13 +76,13 @@ For different data sources, the vertex configurations are different. There are m |Parameter|Type|Default value|Required|Description| |:---|:---|:---|:---|:---| -|`tags.name`|string|-|Yes|The tag name defined in Nebula Graph.| +|`tags.name`|string|-|Yes|The tag name defined in NebulaGraph.| |`tags.type.source`|string|-|Yes|Specify a data source. For example, `csv`.| |`tags.type.sink`|string|`client`|Yes|Specify an import method. Optional values are `client` and `SST`.| |`tags.fields`|list\[string\]|-|Yes|The header or column name of the column corresponding to properties. If there is a header or a column name, please use that name directly. If a CSV file does not have a header, use the form of `[_c0, _c1, _c2]` to represent the first column, the second column, the third column, and so on.| -|`tags.nebula.fields`|list\[string\]|-|Yes|Property names defined in Nebula Graph, the order of which must correspond to `tags.fields`. For example, `[_c1, _c2]` corresponds to `[name, age]`, which means that values in the second column are the values of the property `name`, and values in the third column are the values of the property `age`.| +|`tags.nebula.fields`|list\[string\]|-|Yes|Property names defined in NebulaGraph, the order of which must correspond to `tags.fields`. For example, `[_c1, _c2]` corresponds to `[name, age]`, which means that values in the second column are the values of the property `name`, and values in the third column are the values of the property `age`.| |`tags.vertex.field`|string|-|Yes|The column of vertex IDs. 
For example, when a CSV file has no header, users can use `_c0` to indicate values in the first column are vertex IDs.| -|`tags.batch`|int|`256`|Yes|The maximum number of vertices written into Nebula Graph in a single batch.| +|`tags.batch`|int|`256`|Yes|The maximum number of vertices written into NebulaGraph in a single batch.| |`tags.partition`|int|`32`|Yes|The number of Spark partitions.| ### Specific parameters of Parquet/JSON/ORC data sources @@ -182,13 +182,13 @@ For different data sources, the vertex configurations are different. There are m |Parameter|Type|Default value|Required|Description| |:---|:---|:---|:---|:---| |`tags.path`|string|-|Yes|The path of the source file specified to generate SST files.| -|`tags.repartitionWithNebula`|bool|`false`|No|Whether to repartition data based on the number of partitions of graph spaces in Nebula Graph when generating the SST file. Enabling this function can reduce the time required to DOWNLOAD and INGEST SST files. If the number of the partition (partition_num) in the graph space is greater than `1`, set the parameter to `true`, otherwise, the generated data file may only contain vertices without tags.| +|`tags.repartitionWithNebula`|bool|`false`|No|Whether to repartition data based on the number of partitions of graph spaces in NebulaGraph when generating the SST file. Enabling this function can reduce the time required to DOWNLOAD and INGEST SST files. If the number of the partition (partition_num) in the graph space is greater than `1`, set the parameter to `true`, otherwise, the generated data file may only contain vertices without tags.| -### Specific parameters of Nebula Graph +### Specific parameters of NebulaGraph !!! enterpriseonly - Specific parameters of Nebula Graph are used for exporting Nebula Graph data, which is supported by Exchange Enterprise Edition only. + Specific parameters of NebulaGraph are used for exporting NebulaGraph data, which is supported by Exchange Enterprise Edition only. 
|Parameter|Data type|Default value|Required|Description| @@ -206,15 +206,15 @@ For the specific parameters of different data sources for edge configurations, p |Parameter|Type|Default value|Required|Description| |:---|:---|:---|:---|:---| -|`edges.name`| string|-|Yes|The edge type name defined in Nebula Graph.| +|`edges.name`| string|-|Yes|The edge type name defined in NebulaGraph.| |`edges.type.source`|string|-|Yes|The data source of edges. For example, `csv`.| |`edges.type.sink`|string|`client`|Yes|The method specified to import data. Optional values are `client` and `SST`.| |`edges.fields`|list\[string\]|-|Yes|The header or column name of the column corresponding to properties. If there is a header or column name, please use that name directly. If a CSV file does not have a header, use the form of `[_c0, _c1, _c2]` to represent the first column, the second column, the third column, and so on.| -|`edges.nebula.fields`|list\[string\]|-|Yes|Edge names defined in Nebula Graph, the order of which must correspond to `edges.fields`. For example, `[_c2, _c3]` corresponds to `[start_year, end_year]`, which means that values in the third column are the values of the start year, and values in the fourth column are the values of the end year.| +|`edges.nebula.fields`|list\[string\]|-|Yes|Edge property names defined in NebulaGraph, the order of which must correspond to `edges.fields`. For example, `[_c2, _c3]` corresponds to `[start_year, end_year]`, which means that values in the third column are the values of the start year, and values in the fourth column are the values of the end year.| |`edges.source.field`|string|-|Yes|The column of source vertices of edges. For example, `_c0` indicates a value in the first column that is used as the source vertex of an edge.| |`edges.target.field`|string|-|Yes|The column of destination vertices of edges.
For example, `_c0` indicates a value in the first column that is used as the destination vertex of an edge.| |`edges.ranking`|int|-|No|The column of rank values. If not specified, all rank values are `0` by default.| -|`edges.batch`|int|`256`|Yes|The maximum number of edges written into Nebula Graph in a single batch.| +|`edges.batch`|int|`256`|Yes|The maximum number of edges written into NebulaGraph in a single batch.| |`edges.partition`|int|`32`|Yes|The number of Spark partitions.| ### Specific parameters for generating SST files @@ -222,9 +222,9 @@ For the specific parameters of different data sources for edge configurations, p |Parameter|Type|Default value|Required|Description| |:---|:---|:---|:---|:---| |`edges.path`|string|-|Yes|The path of the source file specified to generate SST files.| -|`edges.repartitionWithNebula`|bool|`false`|No|Whether to repartition data based on the number of partitions of graph spaces in Nebula Graph when generating the SST file. Enabling this function can reduce the time required to DOWNLOAD and INGEST SST files.| +|`edges.repartitionWithNebula`|bool|`false`|No|Whether to repartition data based on the number of partitions of graph spaces in NebulaGraph when generating the SST file. 
Enabling this function can reduce the time required to DOWNLOAD and INGEST SST files.| -### Specific parameters of Nebula Graph +### Specific parameters of NebulaGraph |Parameter|Type|Default value|Required|Description| |:---|:---|:---|:---|:---| diff --git a/docs-2.0/nebula-exchange/use-exchange/ex-ug-export-from-nebula.md b/docs-2.0/nebula-exchange/use-exchange/ex-ug-export-from-nebula.md index f1ffcc2b114..b6bbd818784 100644 --- a/docs-2.0/nebula-exchange/use-exchange/ex-ug-export-from-nebula.md +++ b/docs-2.0/nebula-exchange/use-exchange/ex-ug-export-from-nebula.md @@ -1,14 +1,14 @@ -# Export data from Nebula Graph +# Export data from NebulaGraph -This topic uses an example to illustrate how to use Exchange to export data from Nebula Graph to a CSV file. +This topic uses an example to illustrate how to use Exchange to export data from NebulaGraph to a CSV file. !!! enterpriseonly - Only Exchange Enterprise Edition supports exporting data from Nebula Graph to a CSV file. + Only Exchange Enterprise Edition supports exporting data from NebulaGraph to a CSV file. !!! note - SSL encryption is not supported when exporting data from Nebula Graph. + SSL encryption is not supported when exporting data from NebulaGraph. ## Preparation @@ -34,11 +34,11 @@ CentOS 7.9.2009 | Hadoop | 2.10.1 | | Scala | 2.12.11 | | Spark | 2.4.7 | -| Nebula Graph | {{nebula.release}} | +| NebulaGraph | {{nebula.release}} | ### Dataset -As the data source, Nebula Graph stores the [basketballplayer dataset](https://docs.nebula-graph.io/2.0/basketballplayer-2.X.ngql) in this example, the Schema elements of which are shown as follows. +As the data source, NebulaGraph stores the [basketballplayer dataset](https://docs.nebula-graph.io/2.0/basketballplayer-2.X.ngql) in this example, the Schema elements of which are shown as follows. | Element | Name | Property | | :--- | :--- | :--- | @@ -49,11 +49,11 @@ As the data source, Nebula Graph stores the [basketballplayer dataset](https://d ## Steps -1. 
Get the JAR file of Exchange Enterprise Edition from the [Nebula Graph Enterprise Edition Package](https://nebula-graph.com.cn/pricing/). +1. Get the JAR file of Exchange Enterprise Edition from the [NebulaGraph Enterprise Edition Package](https://nebula-graph.com.cn/pricing/). 2. Modify the configuration file. - Exchange Enterprise Edition provides the configuration template `export_application.conf` for exporting Nebula Graph data. For details, see [Exchange parameters](../parameter-reference/ex-ug-parameter.md). The core content of the configuration file used in this example is as follows: + Exchange Enterprise Edition provides the configuration template `export_application.conf` for exporting NebulaGraph data. For details, see [Exchange parameters](../parameter-reference/ex-ug-parameter.md). The core content of the configuration file used in this example is as follows: ```conf ... @@ -112,7 +112,7 @@ As the data source, Nebula Graph stores the [basketballplayer dataset](https://d } ``` -3. Export data from Nebula Graph with the following command. +3. Export data from NebulaGraph with the following command. ```bash /bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange nebula-exchange-x.y.z.jar_path> -c diff --git a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-clickhouse.md b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-clickhouse.md index 5153d754fcc..14c84ffc628 100644 --- a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-clickhouse.md +++ b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-clickhouse.md @@ -1,6 +1,6 @@ # Import data from ClickHouse -This topic provides an example of how to use Exchange to import data stored on ClickHouse into Nebula Graph. +This topic provides an example of how to use Exchange to import data stored on ClickHouse into NebulaGraph. ## Data set @@ -20,33 +20,33 @@ This example is done on MacOS. 
Here is the environment configuration information - ClickHouse: docker deployment yandex/clickhouse-server tag: latest(2021.07.01) -- Nebula Graph: {{nebula.release}}. [Deploy Nebula Graph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md). +- NebulaGraph: {{nebula.release}}. [Deploy NebulaGraph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md). ## Prerequisites Before importing data, you need to confirm the following information: -- Nebula Graph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information: +- NebulaGraph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information: - IP addresses and ports of Graph and Meta services. - - The user name and password with write permission to Nebula Graph. + - The user name and password with write permission to NebulaGraph. - Exchange has been [compiled](../ex-ug-compile.md), or [download](https://repo1.maven.org/maven2/com/vesoft/nebula-exchange/) the compiled `.jar` file directly. - Spark has been installed. -- Learn about the Schema created in Nebula Graph, including names and properties of Tags and Edge types, and more. +- Learn about the Schema created in NebulaGraph, including names and properties of Tags and Edge types, and more. - The Hadoop service has been installed and started. ## Steps -### Step 1: Create the Schema in Nebula Graph +### Step 1: Create the Schema in NebulaGraph -Analyze the data to create a Schema in Nebula Graph by following these steps: +Analyze the data to create a Schema in NebulaGraph by following these steps: -1. Identify the Schema elements. 
The Schema elements in the Nebula Graph are shown in the following table. +1. Identify the Schema elements. The Schema elements in NebulaGraph are shown in the following table. | Element | Name | Property | | :--- | :--- | :--- | @@ -55,7 +55,7 @@ Analyze the data to create a Schema in Nebula Graph by following these steps: | Edge Type | `follow` | `degree int` | | Edge Type | `serve` | `start_year int, end_year int` | -2. Create a graph space **basketballplayer** in the Nebula Graph and create a Schema as shown below. +2. Create a graph space **basketballplayer** in NebulaGraph and create a Schema as shown below. ```ngql ## Create a graph space. @@ -102,7 +102,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` } } -# Nebula Graph configuration +# NebulaGraph configuration nebula: { address:{ # Specify the IP addresses and ports for Graph and Meta services. @@ -111,10 +111,10 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` graph:["127.0.0.1:9669"] meta:["127.0.0.1:9559"] } - # The account entered must have write permission for the Nebula Graph space. + # The account entered must have write permission for the NebulaGraph space. user: root pswd: nebula - # Fill in the name of the graph space you want to write data to in the Nebula Graph. + # Fill in the name of the graph space you want to write data to in NebulaGraph. space: basketballplayer connection: { timeout: 3000 @@ -140,7 +140,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` type: { # Specify the data source file format to ClickHouse. source: clickhouse - # Specify how to import the data of vertexes into Nebula Graph: Client or SST. + # Specify how to import the data of vertexes into NebulaGraph: Client or SST.
sink: client } @@ -155,19 +155,19 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` sentence:"select * from player" - # Specify the column names in the player table in fields, and their corresponding values are specified as properties in the Nebula Graph. + # Specify the column names in the player table in fields, and their corresponding values are specified as properties in NebulaGraph. # The sequence of fields and nebula.fields must correspond to each other. # If multiple column names need to be specified, separate them by commas. fields: [name,age] nebula.fields: [name,age] - # Specify a column of data in the table as the source of vertex VID in the Nebula Graph. + # Specify a column of data in the table as the source of vertex VID in NebulaGraph. vertex: { field:playerid # policy:hash } - # The number of data written to Nebula Graph in a single batch. + # The number of data rows written to NebulaGraph in a single batch. batch: 256 # The number of Spark partitions. @@ -200,14 +200,14 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` edges: [ # Set the information about the Edge Type follow. { - # The corresponding Edge Type name in Nebula Graph. + # The corresponding Edge Type name in NebulaGraph. name: follow type: { # Specify the data source file format to ClickHouse. source: clickhouse - # Specify how to import the data into Nebula Graph: Client or SST. + # Specify how to import the data into NebulaGraph: Client or SST. sink: client } @@ -222,7 +222,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` sentence:"select * from follow" - # Specify the column names in the follow table in fields, and their corresponding values are specified as properties in the Nebula Graph. + # Specify the column names in the follow table in fields, and their corresponding values are specified as properties in NebulaGraph.
# The sequence of fields and nebula.fields must correspond to each other. # If multiple column names need to be specified, separate them by commas. fields: [degree] @@ -241,7 +241,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # (Optional) Specify a column as the source of the rank. #ranking: rank - # The number of data written to Nebula Graph in a single batch. + # The number of data rows written to NebulaGraph in a single batch. batch: 256 # The number of Spark partitions. @@ -279,9 +279,9 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` } ``` -### Step 3: Import data into Nebula Graph +### Step 3: Import data into NebulaGraph -Run the following command to import ClickHouse data into Nebula Graph. For descriptions of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md). +Run the following command to import ClickHouse data into NebulaGraph. For descriptions of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md). ```bash ${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange -c @@ -301,7 +301,7 @@ You can search for `batchSuccess.` in the command output to ### Step 4: (optional) Validate data -Users can verify that data has been imported by executing a query in the Nebula Graph client (for example, Nebula Studio). For example: +Users can verify that data has been imported by executing a query in the NebulaGraph client (for example, Nebula Studio). For example: ```ngql GO FROM "player100" OVER follow; @@ -309,6 +309,6 @@ GO FROM "player100" OVER follow; Users can also run the [SHOW STATS](../../3.ngql-guide/7.general-query-statements/6.show/14.show-stats.md) command to view statistics. -### Step 5: (optional) Rebuild indexes in Nebula Graph +### Step 5: (optional) Rebuild indexes in NebulaGraph -With the data imported, users can recreate and rebuild indexes in Nebula Graph.
For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md). +With the data imported, users can recreate and rebuild indexes in NebulaGraph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md). diff --git a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-csv.md b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-csv.md index b8f1b277c5f..3b513c470ad 100644 --- a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-csv.md +++ b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-csv.md @@ -1,8 +1,8 @@ # Import data from CSV files -This topic provides an example of how to use Exchange to import Nebula Graph data stored in HDFS or local CSV files. +This topic provides an example of how to use Exchange to import data stored in HDFS or local CSV files into NebulaGraph. -To import a local CSV file to Nebula Graph, see [Nebula Importer](https://github.com/vesoft-inc/nebula-importer "Click to go to GitHub"). +To import a local CSV file to NebulaGraph, see [Nebula Importer](https://github.com/vesoft-inc/nebula-importer "Click to go to GitHub"). ## Data set @@ -20,35 +20,35 @@ This example is done on MacOS. Here is the environment configuration information - Hadoop: 2.9.2, pseudo-distributed deployment -- Nebula Graph: {{nebula.release}}. [Deploy Nebula Graph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md). +- NebulaGraph: {{nebula.release}}. [Deploy NebulaGraph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md).
## Prerequisites Before importing data, you need to confirm the following information: -- Nebula Graph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information: +- NebulaGraph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information: - IP addresses and ports of Graph and Meta services. - - The user name and password with write permission to Nebula Graph. + - The user name and password with write permission to NebulaGraph. - Exchange has been [compiled](../ex-ug-compile.md), or [download](https://repo1.maven.org/maven2/com/vesoft/nebula-exchange/) the compiled `.jar` file directly. - Spark has been installed. -- Learn about the Schema created in Nebula Graph, including names and properties of Tags and Edge types, and more. +- Learn about the Schema created in NebulaGraph, including names and properties of Tags and Edge types, and more. - If files are stored in HDFS, ensure that the Hadoop service is running normally. -- If files are stored locally and Nebula Graph is a cluster architecture, you need to place the files in the same directory locally on each machine in the cluster. +- If files are stored locally and NebulaGraph is deployed in a cluster architecture, you need to place the files in the same directory locally on each machine in the cluster. ## Steps -### Step 1: Create the Schema in Nebula Graph +### Step 1: Create the Schema in NebulaGraph -Analyze the data to create a Schema in Nebula Graph by following these steps: +Analyze the data to create a Schema in NebulaGraph by following these steps: -1. Identify the Schema elements. The Schema elements in the Nebula Graph are shown in the following table. +1. Identify the Schema elements. The Schema elements in NebulaGraph are shown in the following table.
| Element | Name | Property | | :--- | :--- | :--- | @@ -57,7 +57,7 @@ Analyze the data to create a Schema in Nebula Graph by following these steps: | Edge Type | `follow` | `degree int` | | Edge Type | `serve` | `start_year int, end_year int` | -2. Create a graph space **basketballplayer** in the Nebula Graph and create a Schema as shown below. +2. Create a graph space **basketballplayer** in the NebulaGraph and create a Schema as shown below. ```ngql ## Create a graph space. @@ -120,7 +120,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` } } - # Nebula Graph configuration + # NebulaGraph configuration nebula: { address:{ # Specify the IP addresses and ports for Graph and Meta services. @@ -130,11 +130,11 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` meta:["127.0.0.1:9559"] } - # The account entered must have write permission for the Nebula Graph space. + # The account entered must have write permission for the NebulaGraph space. user: root pswd: nebula - # Fill in the name of the graph space you want to write data to in the Nebula Graph. + # Fill in the name of the graph space you want to write data to in the NebulaGraph. space: basketballplayer connection: { timeout: 3000 @@ -157,13 +157,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` tags: [ # Set the information about the Tag player. { - # Specify the Tag name defined in Nebula Graph. + # Specify the Tag name defined in NebulaGraph. name: player type: { # Specify the data source file format to CSV. source: csv - # Specify how to import the data into Nebula Graph: Client or SST. + # Specify how to import the data into NebulaGraph: Client or SST. sink: client } @@ -176,13 +176,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the CSV file has headers, use the actual column names. 
fields: [_c1, _c2] - # Specify the column names in the player table in fields, and their corresponding values are specified as properties in the Nebula Graph. + # Specify the column names in the player table in fields, and their corresponding values are specified as properties in the NebulaGraph. # The sequence of fields and nebula.fields must correspond to each other. nebula.fields: [age, name] - # Specify a column of data in the table as the source of vertex VID in the Nebula Graph. + # Specify a column of data in the table as the source of vertex VID in the NebulaGraph. # The value of vertex must be the same as the column names in the above fields or csv.fields. - # Currently, Nebula Graph {{nebula.release}} supports only strings or integers of VID. + # Currently, NebulaGraph {{nebula.release}} supports only strings or integers of VID. vertex: { field:_c0 # policy:hash @@ -195,7 +195,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the CSV file does not have a header, set the header to false. The default value is false. header: false - # The number of data written to Nebula Graph in a single batch. + # The number of data written to NebulaGraph in a single batch. batch: 256 # The number of Spark partitions. @@ -204,13 +204,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # Set the information about the Tag Team. { - # Specify the Tag name defined in Nebula Graph. + # Specify the Tag name defined in NebulaGraph. name: team type: { # Specify the data source file format to CSV. source: csv - # Specify how to import the data into Nebula Graph: Client or SST. + # Specify how to import the data into NebulaGraph: Client or SST. sink: client } @@ -223,13 +223,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the CSV file has headers, use the actual column names. 
fields: [_c1] - # Specify the column names in the player table in fields, and their corresponding values are specified as properties in the Nebula Graph. + # Specify the column names in the player table in fields, and their corresponding values are specified as properties in the NebulaGraph. # The sequence of fields and nebula.fields must correspond to each other. nebula.fields: [name] - # Specify a column of data in the table as the source of VIDs in the Nebula Graph. + # Specify a column of data in the table as the source of VIDs in the NebulaGraph. # The value of vertex must be the same as the column names in the above fields or csv.fields. - # Currently, Nebula Graph {{nebula.release}} supports only strings or integers of VID. + # Currently, NebulaGraph {{nebula.release}} supports only strings or integers of VID. vertex: { field:_c0 # policy:hash @@ -242,7 +242,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the CSV file does not have a header, set the header to false. The default value is false. header: false - # The number of data written to Nebula Graph in a single batch. + # The number of data written to NebulaGraph in a single batch. batch: 256 # The number of Spark partitions. @@ -256,13 +256,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` edges: [ # Set the information about the Edge Type follow. { - # Specify the Edge Type name defined in Nebula Graph. + # Specify the Edge Type name defined in NebulaGraph. name: follow type: { # Specify the data source file format to CSV. source: csv - # Specify how to import the data into Nebula Graph: Client or SST. + # Specify how to import the data into NebulaGraph: Client or SST. sink: client } @@ -275,13 +275,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the CSV file has headers, use the actual column names. 
fields: [_c2] - # Specify the column names in the edge table in fields, and their corresponding values are specified as properties in the Nebula Graph. + # Specify the column names in the edge table in fields, and their corresponding values are specified as properties in the NebulaGraph. # The sequence of fields and nebula.fields must correspond to each other. nebula.fields: [degree] # Specify a column as the source for the source and destination vertexes. # The value of vertex must be the same as the column names in the above fields or csv.fields. - # Currently, Nebula Graph {{nebula.release}} supports only strings or integers of VID. + # Currently, NebulaGraph {{nebula.release}} supports only strings or integers of VID. source: { field: _c0 } @@ -300,7 +300,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the CSV file does not have a header, set the header to false. The default value is false. header: false - # The number of data written to Nebula Graph in a single batch. + # The number of data written to NebulaGraph in a single batch. batch: 256 # The number of Spark partitions. @@ -309,13 +309,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # Set the information about the Edge Type serve. { - # Specify the Edge Type name defined in Nebula Graph. + # Specify the Edge Type name defined in NebulaGraph. name: serve type: { # Specify the data source file format to CSV. source: csv - # Specify how to import the data into Nebula Graph: Client or SST. + # Specify how to import the data into NebulaGraph: Client or SST. sink: client } @@ -328,13 +328,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the CSV file has headers, use the actual column names. fields: [_c2,_c3] - # Specify the column names in the edge table in fields, and their corresponding values are specified as properties in the Nebula Graph. 
+ # Specify the column names in the edge table in fields, and their corresponding values are specified as properties in the NebulaGraph. # The sequence of fields and nebula.fields must correspond to each other. nebula.fields: [start_year, end_year] # Specify a column as the source for the source and destination vertexes. # The value of vertex must be the same as the column names in the above fields or csv.fields. - # Currently, Nebula Graph {{nebula.release}} supports only strings or integers of VID. + # Currently, NebulaGraph {{nebula.release}} supports only strings or integers of VID. source: { field: _c0 } @@ -352,7 +352,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the CSV file does not have a header, set the header to false. The default value is false. header: false - # The number of data written to Nebula Graph in a single batch. + # The number of data written to NebulaGraph in a single batch. batch: 256 # The number of Spark partitions. @@ -364,9 +364,9 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` } ``` -### Step 4: Import data into Nebula Graph +### Step 4: Import data into NebulaGraph -Run the following command to import CSV data into Nebula Graph. For descriptions of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md). +Run the following command to import CSV data into NebulaGraph. For descriptions of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md). ```bash ${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange -c @@ -386,7 +386,7 @@ You can search for `batchSuccess.` in the command output to ### Step 5: (optional) Validate data -Users can verify that data has been imported by executing a query in the Nebula Graph client (for example, Nebula Studio). 
For example: +Users can verify that data has been imported by executing a query in the NebulaGraph client (for example, Nebula Studio). For example: ```ngql GO FROM "player100" OVER follow; @@ -394,6 +394,6 @@ GO FROM "player100" OVER follow; Users can also run the [`SHOW STATS`](../../3.ngql-guide/7.general-query-statements/6.show/14.show-stats.md) command to view statistics. -### Step 6: (optional) Rebuild indexes in Nebula Graph +### Step 6: (optional) Rebuild indexes in NebulaGraph -With the data imported, users can recreate and rebuild indexes in Nebula Graph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md). +With the data imported, users can recreate and rebuild indexes in NebulaGraph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md). diff --git a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-hbase.md b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-hbase.md index b8b9f74ffb8..447ca994523 100644 --- a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-hbase.md +++ b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-hbase.md @@ -1,6 +1,6 @@ # Import data from HBase -This topic provides an example of how to use Exchange to import Nebula Graph data stored in HBase. +This topic provides an example of how to use Exchange to import NebulaGraph data stored in HBase. ## Data set @@ -57,33 +57,33 @@ This example is done on MacOS. Here is the environment configuration information - HBase: 2.2.7 -- Nebula Graph: {{nebula.release}}. [Deploy Nebula Graph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md). +- NebulaGraph: {{nebula.release}}. [Deploy NebulaGraph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md). 
## Prerequisites Before importing data, you need to confirm the following information: -- Nebula Graph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information: +- NebulaGraph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information: - IP addresses and ports of Graph and Meta services. - - The user name and password with write permission to Nebula Graph. + - The user name and password with write permission to NebulaGraph. - Exchange has been [compiled](../ex-ug-compile.md), or [download](https://repo1.maven.org/maven2/com/vesoft/nebula-exchange/) the compiled `.jar` file directly. - Spark has been installed. -- Learn about the Schema created in Nebula Graph, including names and properties of Tags and Edge types, and more. +- Learn about the Schema created in NebulaGraph, including names and properties of Tags and Edge types, and more. - The Hadoop service has been installed and started. ## Steps -### Step 1: Create the Schema in Nebula Graph +### Step 1: Create the Schema in NebulaGraph -Analyze the data to create a Schema in Nebula Graph by following these steps: +Analyze the data to create a Schema in NebulaGraph by following these steps: -1. Identify the Schema elements. The Schema elements in the Nebula Graph are shown in the following table. +1. Identify the Schema elements. The Schema elements in the NebulaGraph are shown in the following table. | Element | Name | Property | | :--- | :--- | :--- | @@ -92,7 +92,7 @@ Analyze the data to create a Schema in Nebula Graph by following these steps: | Edge Type | `follow` | `degree int` | | Edge Type | `serve` | `start_year int, end_year int` | -2. Create a graph space **basketballplayer** in the Nebula Graph and create a Schema as shown below. +2. 
Create a graph space **basketballplayer** in the NebulaGraph and create a Schema as shown below. ```ngql ## Create a graph space. @@ -140,7 +140,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` } - # Nebula Graph configuration + # NebulaGraph configuration nebula: { address:{ # Specify the IP addresses and ports for Graph and all Meta services. @@ -149,10 +149,10 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` graph:["127.0.0.1:9669"] meta:["127.0.0.1:9559"] } - # The account entered must have write permission for the Nebula Graph space. + # The account entered must have write permission for the NebulaGraph space. user: root pswd: nebula - # Fill in the name of the graph space you want to write data to in the Nebula Graph. + # Fill in the name of the graph space you want to write data to in the NebulaGraph. space: basketballplayer connection: { timeout: 3000 @@ -175,12 +175,12 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # Set information about Tag player. # If you want to set RowKey as the data source, enter rowkey and the actual column name of the column family. { - # The Tag name in Nebula Graph. + # The Tag name in NebulaGraph. name: player type: { # Specify the data source file format to HBase. source: hbase - # Specify how to import the data into Nebula Graph: Client or SST. + # Specify how to import the data into NebulaGraph: Client or SST. sink: client } host:192.168.*.* @@ -188,20 +188,20 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` table:"player" columnFamily:"cf" - # Specify the column names in the player table in fields, and their corresponding values are specified as properties in the Nebula Graph. + # Specify the column names in the player table in fields, and their corresponding values are specified as properties in the NebulaGraph. 
# The sequence of fields and nebula.fields must correspond to each other. # If multiple column names need to be specified, separate them by commas. fields: [age,name] nebula.fields: [age,name] - # Specify a column of data in the table as the source of vertex VID in the Nebula Graph. + # Specify a column of data in the table as the source of vertex VID in the NebulaGraph. # For example, if rowkey is the source of the VID, enter rowkey. vertex:{ field:rowkey } - # Number of pieces of data written to Nebula Graph in a single batch. + # Number of pieces of data written to NebulaGraph in a single batch. batch: 256 # Number of Spark partitions @@ -233,15 +233,15 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` edges: [ # Set the information about the Edge Type follow. { - # The corresponding Edge Type name in Nebula Graph. + # The corresponding Edge Type name in NebulaGraph. name: follow type: { # Specify the data source file format to HBase. source: hbase - # Specify how to import the Edge type data into Nebula Graph. - # Specify how to import the data into Nebula Graph: Client or SST. + # Specify how to import the Edge type data into NebulaGraph. + # Specify how to import the data into NebulaGraph: Client or SST. sink: client } @@ -250,7 +250,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` table:"follow" columnFamily:"cf" - # Specify the column names in the follow table in fields, and their corresponding values are specified as properties in the Nebula Graph. + # Specify the column names in the follow table in fields, and their corresponding values are specified as properties in the NebulaGraph. # The sequence of fields and nebula.fields must correspond to each other. # If multiple column names need to be specified, separate them by commas. 
fields: [degree] @@ -270,7 +270,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # (Optional) Specify a column as the source of the rank. #ranking: rank - # The number of data written to Nebula Graph in a single batch. + # The number of data written to NebulaGraph in a single batch. batch: 256 # The number of Spark partitions. @@ -309,9 +309,9 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` } ``` -### Step 3: Import data into Nebula Graph +### Step 3: Import data into NebulaGraph -Run the following command to import HBase data into Nebula Graph. For descriptions of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md). +Run the following command to import HBase data into NebulaGraph. For descriptions of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md). ```bash ${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange -c @@ -331,7 +331,7 @@ You can search for `batchSuccess.` in the command output to ### Step 4: (optional) Validate data -Users can verify that data has been imported by executing a query in the Nebula Graph client (for example, Nebula Studio). For example: +Users can verify that data has been imported by executing a query in the NebulaGraph client (for example, Nebula Studio). For example: ```ngql GO FROM "player100" OVER follow; @@ -339,6 +339,6 @@ GO FROM "player100" OVER follow; Users can also run the [SHOW STATS](../../3.ngql-guide/7.general-query-statements/6.show/14.show-stats.md) command to view statistics. -### Step 5: (optional) Rebuild indexes in Nebula Graph +### Step 5: (optional) Rebuild indexes in NebulaGraph -With the data imported, users can recreate and rebuild indexes in Nebula Graph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md). 
+With the data imported, users can recreate and rebuild indexes in NebulaGraph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md). diff --git a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-hive.md b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-hive.md index 6a3bbdc1df8..1f1bd0256da 100644 --- a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-hive.md +++ b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-hive.md @@ -1,6 +1,6 @@ # Import data from Hive -This topic provides an example of how to use Exchange to import Nebula Graph data stored in Hive. +This topic provides an example of how to use Exchange to import NebulaGraph data stored in Hive. ## Data set @@ -48,7 +48,7 @@ scala> spark.sql("describe basketball.serve").show !!! note - The Hive data type `bigint` corresponds to the Nebula Graph `int`. + The Hive data type `bigint` corresponds to the NebulaGraph `int`. ## Environment @@ -64,33 +64,33 @@ This example is done on MacOS. Here is the environment configuration information - Hive: 2.3.7, Hive Metastore database is MySQL 8.0.22 -- Nebula Graph: {{nebula.release}}. [Deploy Nebula Graph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md). +- NebulaGraph: {{nebula.release}}. [Deploy NebulaGraph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md). 
## Prerequisites Before importing data, you need to confirm the following information: -- Nebula Graph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information: +- NebulaGraph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information: - IP addresses and ports of Graph and Meta services. - - The user name and password with write permission to Nebula Graph. + - The user name and password with write permission to NebulaGraph. - Exchange has been [compiled](../ex-ug-compile.md), or [download](https://repo1.maven.org/maven2/com/vesoft/nebula-exchange/) the compiled `.jar` file directly. - Spark has been installed. -- Learn about the Schema created in Nebula Graph, including names and properties of Tags and Edge types, and more. +- Learn about the Schema created in NebulaGraph, including names and properties of Tags and Edge types, and more. - Hadoop has been installed and started, and the Hive Metastore database (MySQL in this example) has been started. ## Steps -### Step 1: Create the Schema in Nebula Graph +### Step 1: Create the Schema in NebulaGraph -Analyze the data to create a Schema in Nebula Graph by following these steps: +Analyze the data to create a Schema in NebulaGraph by following these steps: -1. Identify the Schema elements. The Schema elements in the Nebula Graph are shown in the following table. +1. Identify the Schema elements. The Schema elements in the NebulaGraph are shown in the following table. | Element | Name | Property | | :--- | :--- | :--- | @@ -99,7 +99,7 @@ Analyze the data to create a Schema in Nebula Graph by following these steps: | Edge Type | `follow` | `degree int` | | Edge Type | `serve` | `start_year int, end_year int` | -2. 
Create a graph space **basketballplayer** in the Nebula Graph and create a Schema as shown below. +2. Create a graph space **basketballplayer** in the NebulaGraph and create a Schema as shown below. ```ngql ## Create a graph space @@ -181,7 +181,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # connectionPassword: "password" #} - # Nebula Graph configuration + # NebulaGraph configuration nebula: { address:{ # Specify the IP addresses and ports for Graph and all Meta services. @@ -190,10 +190,10 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` graph:["127.0.0.1:9669"] meta:["127.0.0.1:9559"] } - # The account entered must have write permission for the Nebula Graph space. + # The account entered must have write permission for the NebulaGraph space. user: root pswd: nebula - # Fill in the name of the graph space you want to write data to in the Nebula Graph. + # Fill in the name of the graph space you want to write data to in the NebulaGraph. space: basketballplayer connection: { timeout: 3000 @@ -215,30 +215,30 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` tags: [ # Set the information about the Tag player. { - # The Tag name in Nebula Graph. + # The Tag name in NebulaGraph. name: player type: { # Specify the data source file format to Hive. source: hive - # Specify how to import the data into Nebula Graph: Client or SST. + # Specify how to import the data into NebulaGraph: Client or SST. sink: client } # Set the SQL statement to read the data of player table in basketball database. exec: "select playerid, age, name from basketball.player" - # Specify the column names in the player table in fields, and their corresponding values are specified as properties in the Nebula Graph. + # Specify the column names in the player table in fields, and their corresponding values are specified as properties in the NebulaGraph. 
# The sequence of fields and nebula.fields must correspond to each other. # If multiple column names need to be specified, separate them by commas. fields: [age,name] nebula.fields: [age,name] - # Specify a column of data in the table as the source of vertex VID in the Nebula Graph. + # Specify a column of data in the table as the source of vertex VID in the NebulaGraph. vertex:{ field:playerid } - # The number of data written to Nebula Graph in a single batch. + # The number of data written to NebulaGraph in a single batch. batch: 256 # The number of Spark partitions. @@ -267,22 +267,22 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` edges: [ # Set the information about the Edge Type follow. { - # The corresponding Edge Type name in Nebula Graph. + # The corresponding Edge Type name in NebulaGraph. name: follow type: { # Specify the data source file format to Hive. source: hive - # Specify how to import the Edge type data into Nebula Graph. - # Specify how to import the data into Nebula Graph: Client or SST. + # Specify how to import the Edge type data into NebulaGraph. + # Specify how to import the data into NebulaGraph: Client or SST. sink: client } # Set the SQL statement to read the data of follow table in the basketball database. exec: "select src_player, dst_player, degree from basketball.follow" - # Specify the column names in the follow table in Fields, and their corresponding values are specified as properties in the Nebula Graph. + # Specify the column names in the follow table in Fields, and their corresponding values are specified as properties in the NebulaGraph. # The sequence of fields and nebula.fields must correspond to each other. # If multiple column names need to be specified, separate them by commas. fields: [degree] @@ -301,7 +301,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # (Optional) Specify a column as the source of the rank. 
#ranking: rank - # The number of data written to Nebula Graph in a single batch. + # The number of data written to NebulaGraph in a single batch. batch: 256 # The number of Spark partitions. @@ -335,9 +335,9 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` } ``` -### Step 4: Import data into Nebula Graph +### Step 4: Import data into NebulaGraph -Run the following command to import Hive data into Nebula Graph. For a description of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md). +Run the following command to import Hive data into NebulaGraph. For a description of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md). ```bash ${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange -c -h @@ -357,7 +357,7 @@ You can search for `batchSuccess.` in the command output to ### Step 5: (optional) Validate data -Users can verify that data has been imported by executing a query in the Nebula Graph client (for example, Nebula Studio). For example: +Users can verify that data has been imported by executing a query in the NebulaGraph client (for example, Nebula Studio). For example: ```ngql GO FROM "player100" OVER follow; @@ -365,6 +365,6 @@ GO FROM "player100" OVER follow; Users can also run the [SHOW STATS](../../3.ngql-guide/7.general-query-statements/6.show/14.show-stats.md) command to view statistics. -### Step 6: (optional) Rebuild indexes in Nebula Graph +### Step 6: (optional) Rebuild indexes in NebulaGraph -With the data imported, users can recreate and rebuild indexes in Nebula Graph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md). +With the data imported, users can recreate and rebuild indexes in NebulaGraph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md). 
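Across the CSV, HBase, and Hive examples above, the core of every tag block is the same mapping: `fields` names the source columns, `nebula.fields` names the target properties, and `vertex` picks the column that supplies the VID. As a rough illustration of what the client sink does with that mapping for one headerless CSV row (the `build_insert` helper and its signature are invented for this sketch and are not part of Exchange; `_c0`, `_c1`, … mimic Spark's positional column names):

```python
import csv
import io

def build_insert(tag, csv_fields, nebula_fields, vertex_field, row):
    """Turn one source row into an nGQL INSERT VERTEX statement.

    row maps positional column names (_c0, _c1, ...) to string values,
    the way Spark names the columns of a headerless CSV file.
    """
    vid = row[vertex_field]
    props = ", ".join(nebula_fields)
    # Quote non-numeric values with double quotes (nGQL string literals);
    # pass integer-looking values through unquoted.
    values = ", ".join(
        v if v.lstrip("-").isdigit() else f'"{v}"'
        for v in (row[c] for c in csv_fields)
    )
    return f'INSERT VERTEX {tag}({props}) VALUES "{vid}":({values});'

line = "player100,42,Tim Duncan"
row = {f"_c{i}": v for i, v in enumerate(next(csv.reader(io.StringIO(line))))}
stmt = build_insert("player", ["_c1", "_c2"], ["age", "name"], "_c0", row)
print(stmt)
# INSERT VERTEX player(age, name) VALUES "player100":(42, "Tim Duncan");
```

The real tool batches such writes (the `batch: 256` setting above) and derives property types from the graph space's schema; this sketch only shows why the order of `fields` and `nebula.fields` must correspond one-to-one.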
diff --git a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-json.md b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-json.md
index 96c8bb378c6..4a68b3b19d2 100644
--- a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-json.md
+++ b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-json.md
@@ -1,6 +1,6 @@
 # Import data from JSON files
 
-This topic provides an example of how to use Exchange to import Nebula Graph data stored in HDFS or local JSON files.
+This topic provides an example of how to use Exchange to import data stored in HDFS or local JSON files into NebulaGraph.
 
 ## Data set
 
@@ -52,35 +52,35 @@ This example is done on MacOS. Here is the environment configuration information
 
 - Hadoop: 2.9.2, pseudo-distributed deployment
 
-- Nebula Graph: {{nebula.release}}. [Deploy Nebula Graph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md).
+- NebulaGraph: {{nebula.release}}. [Deploy NebulaGraph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md).
 
 ## Prerequisites
 
 Before importing data, you need to confirm the following information:
 
-- Nebula Graph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information:
+- NebulaGraph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information:
 
   - IP addresses and ports of Graph and Meta services.
 
-  - The user name and password with write permission to Nebula Graph.
+  - The user name and password with write permission to NebulaGraph.
 
 - Exchange has been [compiled](../ex-ug-compile.md), or [download](https://repo1.maven.org/maven2/com/vesoft/nebula-exchange/) the compiled `.jar` file directly.
 
 - Spark has been installed.
 
-- Learn about the Schema created in Nebula Graph, including names and properties of Tags and Edge types, and more.
+- Learn about the Schema created in NebulaGraph, including names and properties of Tags and Edge types, and more.
 
 - If files are stored in HDFS, ensure that the Hadoop service is running properly.
 
-- If files are stored locally and Nebula Graph is a cluster architecture, you need to place the files in the same directory locally on each machine in the cluster.
+- If files are stored locally and NebulaGraph is a cluster architecture, you need to place the files in the same directory locally on each machine in the cluster.
 
 ## Steps
 
-### Step 1: Create the Schema in Nebula Graph
+### Step 1: Create the Schema in NebulaGraph
 
-Analyze the data to create a Schema in Nebula Graph by following these steps:
+Analyze the data to create a Schema in NebulaGraph by following these steps:
 
-1. Identify the Schema elements. The Schema elements in the Nebula Graph are shown in the following table.
+1. Identify the Schema elements. The Schema elements in NebulaGraph are shown in the following table.
 
    | Element  | Name | Property |
    | :--- | :--- | :--- |
@@ -89,7 +89,7 @@ Analyze the data to create a Schema in Nebula Graph by following these steps:
    | Edge Type | `follow` | `degree int` |
    | Edge Type | `serve` | `start_year int, end_year int` |
 
-2. Create a graph space **basketballplayer** in the Nebula Graph and create a Schema as shown below.
+2. Create a graph space **basketballplayer** in NebulaGraph and create a Schema as shown below.
 
    ```ngql
    ## Create a graph space.
@@ -148,7 +148,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
         }
     }
 
-  # Nebula Graph configuration
+  # NebulaGraph configuration
   nebula: {
     address:{
       # Specify the IP addresses and ports for Graph and all Meta services.
@@ -158,11 +158,11 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
       meta:["127.0.0.1:9559"]
     }
-    # The account entered must have write permission for the Nebula Graph space.
+    # The account entered must have write permission for the NebulaGraph space.
     user: root
     pswd: nebula
-    # Fill in the name of the graph space you want to write data to in the Nebula Graph.
+    # Fill in the name of the graph space you want to write data to in NebulaGraph.
     space: basketballplayer
     connection: {
       timeout: 3000
@@ -185,13 +185,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
   tags: [
     # Set the information about the Tag player.
     {
-      # Specify the Tag name defined in Nebula Graph.
+      # Specify the Tag name defined in NebulaGraph.
       name: player
       type: {
         # Specify the data source file format to JSON.
         source: json
-        # Specify how to import the data into Nebula Graph: Client or SST.
+        # Specify how to import the data into NebulaGraph: Client or SST.
         sink: client
       }
@@ -200,22 +200,22 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
       # If the file is stored locally, use double quotation marks to enclose the file path, starting with file://. For example, "file:///tmp/xx.json".
       path: "hdfs://192.168.*.*:9000/data/vertex_player.json"
 
-      # Specify the key name in the JSON file in fields, and its corresponding value will serve as the data source for the properties specified in the Nebula Graph.
+      # Specify the key name in the JSON file in fields, and its corresponding value will serve as the data source for the properties specified in NebulaGraph.
       # If multiple column names need to be specified, separate them by commas.
       fields: [age,name]
 
-      # Specify the column names in the player table in fields, and their corresponding values are specified as properties in the Nebula Graph.
+      # Specify the column names in the player table in fields, and their corresponding values are specified as properties in NebulaGraph.
       # The sequence of fields and nebula.fields must correspond to each other.
       nebula.fields: [age, name]
 
-      # Specify a column of data in the table as the source of vertex VID in the Nebula Graph.
+      # Specify a column of data in the table as the source of vertex VID in NebulaGraph.
       # The value of vertex must be the same as that in the JSON file.
-      # Currently, Nebula Graph {{nebula.release}} supports only strings or integers of VID.
+      # Currently, NebulaGraph {{nebula.release}} supports only string or integer VIDs.
       vertex: {
        field:id
      }
 
-      # The number of data written to Nebula Graph in a single batch.
+      # The number of records written to NebulaGraph in a single batch.
       batch: 256
 
       # The number of Spark partitions.
@@ -224,13 +224,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
     # Set the information about the Tag Team.
     {
-      # Specify the Tag name defined in Nebula Graph.
+      # Specify the Tag name defined in NebulaGraph.
       name: team
       type: {
         # Specify the data source file format to JSON.
         source: json
-        # Specify how to import the data into Nebula Graph: Client or SST.
+        # Specify how to import the data into NebulaGraph: Client or SST.
         sink: client
       }
@@ -239,23 +239,23 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
       # If the file is stored locally, use double quotation marks to enclose the file path, starting with file://. For example, "file:///tmp/xx.json".
       path: "hdfs://192.168.*.*:9000/data/vertex_team.json"
 
-      # Specify the key name in the JSON file in fields, and its corresponding value will serve as the data source for the properties specified in the Nebula Graph.
+      # Specify the key name in the JSON file in fields, and its corresponding value will serve as the data source for the properties specified in NebulaGraph.
       # If multiple column names need to be specified, separate them by commas.
       fields: [name]
 
-      # Specify the column names in the player table in fields, and their corresponding values are specified as properties in the Nebula Graph.
+      # Specify the column names in the team table in fields, and their corresponding values are specified as properties in NebulaGraph.
       # The sequence of fields and nebula.fields must correspond to each other.
       nebula.fields: [name]
 
-      # Specify a column of data in the table as the source of vertex VID in the Nebula Graph.
+      # Specify a column of data in the table as the source of vertex VID in NebulaGraph.
       # The value of vertex must be the same as that in the JSON file.
-      # Currently, Nebula Graph {{nebula.release}} supports only strings or integers of VID.
+      # Currently, NebulaGraph {{nebula.release}} supports only string or integer VIDs.
       vertex: {
        field:id
      }
 
-      # The number of data written to Nebula Graph in a single batch.
+      # The number of records written to NebulaGraph in a single batch.
       batch: 256
 
       # The number of Spark partitions.
@@ -269,13 +269,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
   edges: [
     # Set the information about the Edge Type follow.
     {
-      # Specify the Edge Type name defined in Nebula Graph.
+      # Specify the Edge Type name defined in NebulaGraph.
       name: follow
       type: {
         # Specify the data source file format to JSON.
         source: json
-        # Specify how to import the data into Nebula Graph: Client or SST.
+        # Specify how to import the data into NebulaGraph: Client or SST.
         sink: client
       }
@@ -284,17 +284,17 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
       # If the file is stored locally, use double quotation marks to enclose the file path, starting with file://. For example, "file:///tmp/xx.json".
       path: "hdfs://192.168.*.*:9000/data/edge_follow.json"
 
-      # Specify the key name in the JSON file in fields, and its corresponding value will serve as the data source for the properties specified in the Nebula Graph.
+      # Specify the key name in the JSON file in fields, and its corresponding value will serve as the data source for the properties specified in NebulaGraph.
       # If multiple column names need to be specified, separate them by commas.
       fields: [degree]
 
-      # Specify the column names in the edge table in fields, and their corresponding values are specified as properties in the Nebula Graph.
+      # Specify the column names in the edge table in fields, and their corresponding values are specified as properties in NebulaGraph.
       # The sequence of fields and nebula.fields must correspond to each other.
       nebula.fields: [degree]
 
       # Specify a column as the source for the source and destination vertexes.
       # The value of vertex must be the same as that in the JSON file.
-      # Currently, Nebula Graph {{nebula.release}} supports only strings or integers of VID.
+      # Currently, NebulaGraph {{nebula.release}} supports only string or integer VIDs.
       source: {
         field: src
       }
@@ -306,7 +306,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
       # (Optional) Specify a column as the source of the rank.
       #ranking: rank
 
-      # The number of data written to Nebula Graph in a single batch.
+      # The number of records written to NebulaGraph in a single batch.
       batch: 256
 
       # The number of Spark partitions.
@@ -315,13 +315,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
     # Set the information about the Edge Type serve.
     {
-      # Specify the Edge type name defined in Nebula Graph.
+      # Specify the Edge type name defined in NebulaGraph.
       name: serve
       type: {
         # Specify the data source file format to JSON.
         source: json
-        # Specify how to import the data into Nebula Graph: Client or SST.
+        # Specify how to import the data into NebulaGraph: Client or SST.
         sink: client
       }
@@ -330,17 +330,17 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
       # If the file is stored locally, use double quotation marks to enclose the file path, starting with file://. For example, "file:///tmp/xx.json".
       path: "hdfs://192.168.*.*:9000/data/edge_serve.json"
 
-      # Specify the key name in the JSON file in fields, and its corresponding value will serve as the data source for the properties specified in the Nebula Graph.
+      # Specify the key name in the JSON file in fields, and its corresponding value will serve as the data source for the properties specified in NebulaGraph.
       # If multiple column names need to be specified, separate them by commas.
       fields: [start_year,end_year]
 
-      # Specify the column names in the edge table in fields, and their corresponding values are specified as properties in the Nebula Graph.
+      # Specify the column names in the edge table in fields, and their corresponding values are specified as properties in NebulaGraph.
       # The sequence of fields and nebula.fields must correspond to each other.
       nebula.fields: [start_year, end_year]
 
       # Specify a column as the source for the source and destination vertexes.
       # The value of vertex must be the same as that in the JSON file.
-      # Currently, Nebula Graph {{nebula.release}} supports only strings or integers of VID.
+      # Currently, NebulaGraph {{nebula.release}} supports only string or integer VIDs.
       source: {
         field: src
       }
@@ -351,7 +351,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
       # (Optional) Specify a column as the source of the rank.
       #ranking: _c5
 
-      # The number of data written to Nebula Graph in a single batch.
+      # The number of records written to NebulaGraph in a single batch.
       batch: 256
 
       # The number of Spark partitions.
@@ -363,9 +363,9 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
 }
 ```
 
-### Step 4: Import data into Nebula Graph
+### Step 4: Import data into NebulaGraph
 
-Run the following command to import JSON data into Nebula Graph. For a description of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md).
+Run the following command to import JSON data into NebulaGraph. For a description of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md).
 
 ```bash
 ${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange -c 
@@ -385,7 +385,7 @@ You can search for `batchSuccess.` in the command output to
 
 ### Step 5: (optional) Validate data
 
-Users can verify that data has been imported by executing a query in the Nebula Graph client (for example, Nebula Studio). For example:
+Users can verify that data has been imported by executing a query in the NebulaGraph client (for example, Nebula Studio). For example:
 
 ```ngql
 GO FROM "player100" OVER follow;
@@ -393,6 +393,6 @@ GO FROM "player100" OVER follow;
 
 Users can also run the [`SHOW STATS`](../../3.ngql-guide/7.general-query-statements/6.show/14.show-stats.md) command to view statistics.
 
-### Step 6: (optional) Rebuild indexes in Nebula Graph
+### Step 6: (optional) Rebuild indexes in NebulaGraph
 
-With the data imported, users can recreate and rebuild indexes in Nebula Graph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md).
+With the data imported, users can recreate and rebuild indexes in NebulaGraph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md).
diff --git a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-kafka.md b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-kafka.md
index ef9628dc518..b7be9c78cd3 100644
--- a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-kafka.md
+++ b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-kafka.md
@@ -1,6 +1,6 @@
 # Import data from Kafka
 
-This topic provides a simple guide to importing Data stored on Kafka into Nebula Graph using Exchange.
+This topic provides a simple guide to importing data stored in Kafka into NebulaGraph using Exchange.
 
 ## Environment
 
@@ -12,33 +12,33 @@ This example is done on MacOS. Here is the environment configuration information
 
 - Spark: 2.4.7, stand-alone
 
-- Nebula Graph: {{nebula.release}}. [Deploy Nebula Graph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md).
+- NebulaGraph: {{nebula.release}}. [Deploy NebulaGraph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md).
 
 ## Prerequisites
 
 Before importing data, you need to confirm the following information:
 
-- Nebula Graph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information:
+- NebulaGraph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information:
 
   - IP addresses and ports of Graph and Meta services.
 
-  - The user name and password with write permission to Nebula Graph.
+  - The user name and password with write permission to NebulaGraph.
 
 - Exchange has been [compiled](../ex-ug-compile.md), or [download](https://repo1.maven.org/maven2/com/vesoft/nebula-exchange/) the compiled `.jar` file directly.
 
 - Spark has been installed.
 
-- Learn about the Schema created in Nebula Graph, including names and properties of Tags and Edge types, and more.
+- Learn about the Schema created in NebulaGraph, including names and properties of Tags and Edge types, and more.
 
 - The Kafka service has been installed and started.
 
 ## Steps
 
-### Step 1: Create the Schema in Nebula Graph
+### Step 1: Create the Schema in NebulaGraph
 
-Analyze the data to create a Schema in Nebula Graph by following these steps:
+Analyze the data to create a Schema in NebulaGraph by following these steps:
 
-1. Identify the Schema elements. The Schema elements in the Nebula Graph are shown in the following table.
+1. Identify the Schema elements. The Schema elements in NebulaGraph are shown in the following table.
 
   | Element | Name | Property |
   | :--- | :--- | :--- |
@@ -47,7 +47,7 @@ Analyze the data to create a Schema in Nebula Graph by following these steps:
   | Edge Type | `follow` | `degree int` |
   | Edge Type | `serve` | `start_year int, end_year int` |
 
-2. Create a graph space **basketballplayer** in the Nebula Graph and create a Schema as shown below.
+2. Create a graph space **basketballplayer** in NebulaGraph and create a Schema as shown below.
 
   ```ngql
   ## Create a graph space.
@@ -99,7 +99,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
   }
 
-  # Nebula Graph configuration
+  # NebulaGraph configuration
   nebula: {
     address:{
       # Specify the IP addresses and ports for Graph and all Meta services.
@@ -108,10 +108,10 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
       graph:["127.0.0.1:9669"]
       meta:["127.0.0.1:9559"]
     }
-    # The account entered must have write permission for the Nebula Graph space.
+    # The account entered must have write permission for the NebulaGraph space.
     user: root
     pswd: nebula
-    # Fill in the name of the graph space you want to write data to in the Nebula Graph.
+    # Fill in the name of the graph space you want to write data to in NebulaGraph.
     space: basketballplayer
     connection: {
       timeout: 3000
@@ -134,12 +134,12 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
     # Set the information about the Tag player.
     {
-      # The corresponding Tag name in Nebula Graph.
+      # The corresponding Tag name in NebulaGraph.
       name: player
       type: {
         # Specify the data source file format to Kafka.
         source: kafka
-        # Specify how to import the data into Nebula Graph: Client or SST.
+        # Specify how to import the data into NebulaGraph: Client or SST.
         sink: client
       }
       # Kafka server address.
@@ -153,14 +153,14 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
       fields: [key,value]
       nebula.fields: [name,age]
 
-      # Specify a column of data in the table as the source of vertex VID in the Nebula Graph.
+      # Specify a column of data in the table as the source of vertex VID in NebulaGraph.
       # The key is the same as the value above, indicating that key is used as both VID and property name.
       vertex:{
           field:key
       }
 
-      # The number of data written to Nebula Graph in a single batch.
+      # The number of records written to NebulaGraph in a single batch.
       batch: 10
 
       # The number of Spark partitions.
@@ -193,15 +193,15 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
   edges: [
     # Set the information about the Edge Type follow.
     {
-      # The corresponding Edge Type name in Nebula Graph.
+      # The corresponding Edge Type name in NebulaGraph.
       name: follow
       type: {
         # Specify the data source file format to Kafka.
         source: kafka
-        # Specify how to import the Edge type data into Nebula Graph.
-        # Specify how to import the data into Nebula Graph: Client or SST.
+        # Specify how to import the Edge type data into NebulaGraph.
+        # Specify how to import the data into NebulaGraph: Client or SST.
         sink: client
       }
@@ -230,7 +230,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
       # (Optional) Specify a column as the source of the rank.
       #ranking: rank
 
-      # The number of data written to Nebula Graph in a single batch.
+      # The number of records written to NebulaGraph in a single batch.
       batch: 10
 
       # The number of Spark partitions.
@@ -271,9 +271,9 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
 }
 ```
 
-### Step 3: Import data into Nebula Graph
+### Step 3: Import data into NebulaGraph
 
-Run the following command to import Kafka data into Nebula Graph. For a description of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md).
+Run the following command to import Kafka data into NebulaGraph. For a description of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md).
 
 ```bash
 ${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange -c 
@@ -293,7 +293,7 @@ You can search for `batchSuccess.` in the command output to
 
 ### Step 4: (optional) Validate data
 
-Users can verify that data has been imported by executing a query in the Nebula Graph client (for example, Nebula Studio). For example:
+Users can verify that data has been imported by executing a query in the NebulaGraph client (for example, Nebula Studio). For example:
 
 ```ngql
 GO FROM "player100" OVER follow;
@@ -301,6 +301,6 @@ GO FROM "player100" OVER follow;
 
 Users can also run the [SHOW STATS](../../3.ngql-guide/7.general-query-statements/6.show/14.show-stats.md) command to view statistics.
 
-### Step 5: (optional) Rebuild indexes in Nebula Graph
+### Step 5: (optional) Rebuild indexes in NebulaGraph
 
-With the data imported, users can recreate and rebuild indexes in Nebula Graph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md).
+With the data imported, users can recreate and rebuild indexes in NebulaGraph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md).
diff --git a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-maxcompute.md b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-maxcompute.md
index bb2228bf036..3f9c6d06ec9 100644
--- a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-maxcompute.md
+++ b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-maxcompute.md
@@ -1,6 +1,6 @@
 # Import data from MaxCompute
 
-This topic provides an example of how to use Exchange to import Nebula Graph data stored in MaxCompute.
+This topic provides an example of how to use Exchange to import data stored in MaxCompute into NebulaGraph.
 
 ## Data set
 
@@ -20,33 +20,33 @@ This example is done on MacOS. Here is the environment configuration information
 
 - MaxCompute: Alibaba Cloud official version
 
-- Nebula Graph: {{nebula.release}}. [Deploy Nebula Graph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md).
+- NebulaGraph: {{nebula.release}}. [Deploy NebulaGraph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md).
 
 ## Prerequisites
 
 Before importing data, you need to confirm the following information:
 
-- Nebula Graph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information:
+- NebulaGraph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information:
 
   - IP addresses and ports of Graph and Meta services.
 
-  - The user name and password with write permission to Nebula Graph.
+  - The user name and password with write permission to NebulaGraph.
 
 - Exchange has been [compiled](../ex-ug-compile.md), or [download](https://repo1.maven.org/maven2/com/vesoft/nebula-exchange/) the compiled `.jar` file directly.
 
 - Spark has been installed.
 
-- Learn about the Schema created in Nebula Graph, including names and properties of Tags and Edge types, and more.
+- Learn about the Schema created in NebulaGraph, including names and properties of Tags and Edge types, and more.
 
 - The Hadoop service has been installed and started.
 
 ## Steps
 
-### Step 1: Create the Schema in Nebula Graph
+### Step 1: Create the Schema in NebulaGraph
 
-Analyze the data to create a Schema in Nebula Graph by following these steps:
+Analyze the data to create a Schema in NebulaGraph by following these steps:
 
-1. Identify the Schema elements. The Schema elements in the Nebula Graph are shown in the following table.
+1. Identify the Schema elements. The Schema elements in NebulaGraph are shown in the following table.
 
   | Element | Name | Property |
   | :--- | :--- | :--- |
@@ -55,7 +55,7 @@ Analyze the data to create a Schema in Nebula Graph by following these steps:
   | Edge Type | `follow` | `degree int` |
   | Edge Type | `serve` | `start_year int, end_year int` |
 
-2. Create a graph space **basketballplayer** in the Nebula Graph and create a Schema as shown below.
+2. Create a graph space **basketballplayer** in NebulaGraph and create a Schema as shown below.
 
   ```ngql
   ## Create a graph space.
@@ -102,7 +102,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
     }
   }
 
-  # Nebula Graph configuration
+  # NebulaGraph configuration
   nebula: {
     address:{
       # Specify the IP addresses and ports for Graph and Meta services.
@@ -111,10 +111,10 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
       graph:["127.0.0.1:9669"]
       meta:["127.0.0.1:9559"]
     }
-    # The account entered must have write permission for the Nebula Graph space.
+    # The account entered must have write permission for the NebulaGraph space.
     user: root
    pswd: nebula
-    # Fill in the name of the graph space you want to write data to in the Nebula Graph.
+    # Fill in the name of the graph space you want to write data to in NebulaGraph.
     space: basketballplayer
     connection: {
       timeout: 3000
@@ -140,7 +140,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
       type: {
         # Specify the data source file format to MaxCompute.
         source: maxcompute
-        # Specify how to import the data into Nebula Graph: Client or SST.
+        # Specify how to import the data into NebulaGraph: Client or SST.
         sink: client
       }
@@ -165,18 +165,18 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
      # Ensure that the table name in the SQL statement is the same as the value of the table above. This configuration is optional.
      sentence:"select id, name, age, playerid from player where id < 10"
 
-      # Specify the column names in the player table in fields, and their corresponding values are specified as properties in the Nebula Graph.
+      # Specify the column names in the player table in fields, and their corresponding values are specified as properties in NebulaGraph.
      # The sequence of fields and nebula.fields must correspond to each other.
      # If multiple column names need to be specified, separate them by commas.
      fields:[name, age]
      nebula.fields:[name, age]
 
-      # Specify a column of data in the table as the source of vertex VID in the Nebula Graph.
+      # Specify a column of data in the table as the source of vertex VID in NebulaGraph.
      vertex:{
        field: playerid
      }
 
-      # The number of data written to Nebula Graph in a single batch.
+      # The number of records written to NebulaGraph in a single batch.
      batch: 256
 
      # The number of Spark partitions.
@@ -212,15 +212,15 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
   edges: [
     # Set the information about the Edge Type follow.
     {
-      # The corresponding Edge Type name in Nebula Graph.
+      # The corresponding Edge Type name in NebulaGraph.
       name: follow
       type:{
         # Specify the data source file format to MaxCompute.
         source:maxcompute
-        # Specify how to import the Edge type data into Nebula Graph.
-        # Specify how to import the data into Nebula Graph: Client or SST.
+        # Specify how to import the Edge type data into NebulaGraph.
+        # Specify how to import the data into NebulaGraph: Client or SST.
         sink:client
       }
@@ -245,7 +245,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
      # Ensure that the table name in the SQL statement is the same as the value of the table above. This configuration is optional.
      sentence:"select * from follow"
 
-      # Specify the column names in the follow table in Fields, and their corresponding values are specified as properties in the Nebula Graph.
+      # Specify the column names in the follow table in fields, and their corresponding values are specified as properties in NebulaGraph.
      # The sequence of fields and nebula.fields must correspond to each other.
      # If multiple column names need to be specified, separate them by commas.
      fields:[degree]
@@ -267,7 +267,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
      # The number of Spark partitions.
      partition:10
 
-      # The number of data written to Nebula Graph in a single batch.
+      # The number of records written to NebulaGraph in a single batch.
      batch:10
    }
@@ -305,9 +305,9 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
 }
 ```
 
-### Step 3: Import data into Nebula Graph
+### Step 3: Import data into NebulaGraph
 
-Run the following command to import MaxCompute data into Nebula Graph. For a description of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md).
+Run the following command to import MaxCompute data into NebulaGraph. For a description of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md).
 ```bash
 ${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange -c 
@@ -327,7 +327,7 @@ You can search for `batchSuccess.` in the command output to
 
 ### Step 4: (optional) Validate data
 
-Users can verify that data has been imported by executing a query in the Nebula Graph client (for example, Nebula Studio). For example:
+Users can verify that data has been imported by executing a query in the NebulaGraph client (for example, Nebula Studio). For example:
 
 ```ngql
 GO FROM "player100" OVER follow;
@@ -335,6 +335,6 @@ GO FROM "player100" OVER follow;
 
 Users can also run the [`SHOW STATS`](../../3.ngql-guide/7.general-query-statements/6.show/14.show-stats.md) command to view statistics.
 
-### Step 5: (optional) Rebuild indexes in Nebula Graph
+### Step 5: (optional) Rebuild indexes in NebulaGraph
 
-With the data imported, users can recreate and rebuild indexes in Nebula Graph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md).
+With the data imported, users can recreate and rebuild indexes in NebulaGraph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md).
diff --git a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-mysql.md b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-mysql.md
index 7bd6a3664aa..d2e58d8e6bb 100644
--- a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-mysql.md
+++ b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-mysql.md
@@ -1,7 +1,7 @@
 # Import data from MySQL/PostgreSQL
 
-This topic provides an example of how to use Exchange to export MySQL data and import to Nebula Graph. It also applies to exporting
-data from PostgreSQL into Nebula Graph.
+This topic provides an example of how to use Exchange to export MySQL data and import it into NebulaGraph. It also applies to importing
+data from PostgreSQL into NebulaGraph.
 
 ## Data set
 
 This topic takes the [basketballplayer dataset](https://docs-cdn.nebula-graph.com.cn/dataset/dataset.zip) as an example.
@@ -60,33 +60,33 @@ This example is done on MacOS. Here is the environment configuration information
 
 - MySQL: 8.0.23
 
-- Nebula Graph: {{nebula.release}}. [Deploy Nebula Graph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md).
+- NebulaGraph: {{nebula.release}}. [Deploy NebulaGraph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md).
 
 ## Prerequisites
 
 Before importing data, you need to confirm the following information:
 
-- Nebula Graph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information:
+- NebulaGraph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information:
 
   - IP addresses and ports of Graph and Meta services.
 
-  - The user name and password with write permission to Nebula Graph.
+  - The user name and password with write permission to NebulaGraph.
 
 - Exchange has been [compiled](../ex-ug-compile.md), or [download](https://repo1.maven.org/maven2/com/vesoft/nebula-exchange/) the compiled `.jar` file directly.
 
 - Spark has been installed.
 
-- Learn about the Schema created in Nebula Graph, including names and properties of Tags and Edge types, and more.
+- Learn about the Schema created in NebulaGraph, including names and properties of Tags and Edge types, and more.
 
 - The Hadoop service has been installed and started.
 
 ## Steps
 
-### Step 1: Create the Schema in Nebula Graph
+### Step 1: Create the Schema in NebulaGraph
 
-Analyze the data to create a Schema in Nebula Graph by following these steps:
+Analyze the data to create a Schema in NebulaGraph by following these steps:
 
-1. Identify the Schema elements. The Schema elements in the Nebula Graph are shown in the following table.
+1. Identify the Schema elements. The Schema elements in NebulaGraph are shown in the following table.
 
   | Element | Name | Property |
   | :--- | :--- | :--- |
@@ -95,7 +95,7 @@ Analyze the data to create a Schema in Nebula Graph by following these steps:
   | Edge Type | `follow` | `degree int` |
   | Edge Type | `serve` | `start_year int, end_year int` |
 
-2. Create a graph space **basketballplayer** in the Nebula Graph and create a Schema as shown below.
+2. Create a graph space **basketballplayer** in NebulaGraph and create a Schema as shown below.
 
   ```ngql
  ## Create a graph space.
@@ -142,7 +142,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
    }
  }
 
-  # Nebula Graph configuration
+  # NebulaGraph configuration
  nebula: {
    address:{
      # Specify the IP addresses and ports for Graph and Meta services.
@@ -151,10 +151,10 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
      graph:["127.0.0.1:9669"]
      meta:["127.0.0.1:9559"]
    }
-    # The account entered must have write permission for the Nebula Graph space.
+    # The account entered must have write permission for the NebulaGraph space.
    user: root
    pswd: nebula
-    # Fill in the name of the graph space you want to write data to in the Nebula Graph.
+    # Fill in the name of the graph space you want to write data to in NebulaGraph.
    space: basketballplayer
    connection: {
      timeout: 3000
@@ -176,12 +176,12 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
  tags: [
    # Set the information about the Tag player.
    {
-      # The Tag name in Nebula Graph.
+      # The Tag name in NebulaGraph.
      name: player
      type: {
        # Specify the data source file format to MySQL.
        source: mysql
-        # Specify how to import the data into Nebula Graph: Client or SST.
+        # Specify how to import the data into NebulaGraph: Client or SST.
        sink: client
      }
@@ -193,18 +193,18 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
      password:"123456"
      sentence:"select playerid, age, name from player order by playerid;"
 
-      # Specify the column names in the player table in fields, and their corresponding values are specified as properties in the Nebula Graph.
+      # Specify the column names in the player table in fields, and their corresponding values are specified as properties in NebulaGraph.
      # The sequence of fields and nebula.fields must correspond to each other.
      # If multiple column names need to be specified, separate them by commas.
      fields: [age,name]
      nebula.fields: [age,name]
 
-      # Specify a column of data in the table as the source of VIDs in the Nebula Graph.
+      # Specify a column of data in the table as the source of VIDs in NebulaGraph.
      vertex: {
        field:playerid
      }
 
-      # The number of data written to Nebula Graph in a single batch.
+      # The number of records written to NebulaGraph in a single batch.
      batch: 256
 
      # The number of Spark partitions.
@@ -241,15 +241,15 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
  edges: [
    # Set the information about the Edge Type follow.
    {
-      # The corresponding Edge Type name in Nebula Graph.
+      # The corresponding Edge Type name in NebulaGraph.
      name: follow
      type: {
        # Specify the data source file format to MySQL.
        source: mysql
-        # Specify how to import the Edge type data into Nebula Graph.
-        # Specify how to import the data into Nebula Graph: Client or SST.
+        # Specify how to import the Edge type data into NebulaGraph.
+        # Specify how to import the data into NebulaGraph: Client or SST.
        sink: client
      }
@@ -261,7 +261,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
      password:"123456"
      sentence:"select src_player,dst_player,degree from follow order by src_player;"
 
-      # Specify the column names in the follow table in fields, and their corresponding values are specified as properties in the Nebula Graph.
+      # Specify the column names in the follow table in fields, and their corresponding values are specified as properties in NebulaGraph.
      # The sequence of fields and nebula.fields must correspond to each other.
      # If multiple column names need to be specified, separate them by commas.
      fields: [degree]
@@ -280,7 +280,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
      # (Optional) Specify a column as the source of the rank.
      #ranking: rank
 
-      # The number of data written to Nebula Graph in a single batch.
+      # The number of records written to NebulaGraph in a single batch.
      batch: 256
 
      # The number of Spark partitions.
@@ -317,9 +317,9 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
 }
 ```
 
-### Step 3: Import data into Nebula Graph
+### Step 3: Import data into NebulaGraph
 
-Run the following command to import MySQL data into Nebula Graph. For a description of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md).
+Run the following command to import MySQL data into NebulaGraph. For a description of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md).
 
 ```bash
 ${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange -c 
@@ -339,7 +339,7 @@ You can search for `batchSuccess.` in the command output to
 
 ### Step 4: (optional) Validate data
 
-Users can verify that data has been imported by executing a query in the Nebula Graph client (for example, Nebula Studio).
For example: +Users can verify that data has been imported by executing a query in the NebulaGraph client (for example, Nebula Studio). For example: ```ngql GO FROM "player100" OVER follow; @@ -347,6 +347,6 @@ GO FROM "player100" OVER follow; Users can also run the [SHOW STATS](../../3.ngql-guide/7.general-query-statements/6.show/14.show-stats.md) command to view statistics. -### Step 5: (optional) Rebuild indexes in Nebula Graph +### Step 5: (optional) Rebuild indexes in NebulaGraph -With the data imported, users can recreate and rebuild indexes in Nebula Graph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md). +With the data imported, users can recreate and rebuild indexes in NebulaGraph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md). diff --git a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-neo4j.md b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-neo4j.md index b92f22d42c4..ff0790dc385 100644 --- a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-neo4j.md +++ b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-neo4j.md @@ -1,6 +1,6 @@ # Import data from Neo4j -This topic provides an example of how to use Exchange to import Nebula Graph data stored in Neo4j. +This topic provides an example of how to use Exchange to import NebulaGraph data stored in Neo4j. ## Implementation method @@ -16,11 +16,11 @@ When Exchange reads Neo4j data, it needs to do the following: 4. The Reader finally processes the returned data into a DataFrame. -At this point, Exchange has finished exporting the Neo4j data. The data is then written in parallel to the Nebula Graph database. +At this point, Exchange has finished exporting the Neo4j data. The data is then written in parallel to the NebulaGraph database. The whole process is illustrated below. 
-![Nebula Graph® Exchange exports data from the Neo4j database and imports it into the Nebula Graph database in parallel](https://docs-cdn.nebula-graph.com.cn/figures/ex-ug-002.png "Nebula Graph® Exchange migrates Neo4j data") +![NebulaGraph® Exchange exports data from the Neo4j database and imports it into the NebulaGraph database in parallel](https://docs-cdn.nebula-graph.com.cn/figures/ex-ug-002.png "NebulaGraph® Exchange migrates Neo4j data") ## Data set @@ -41,31 +41,31 @@ This example is done on MacOS. Here is the environment configuration information - Neo4j: 3.5.20 Community Edition -- Nebula Graph: {{nebula.release}}. [Deploy Nebula Graph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md). +- NebulaGraph: {{nebula.release}}. [Deploy NebulaGraph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md). ## Prerequisites Before importing data, you need to confirm the following information: -- Nebula Graph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information: +- NebulaGraph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information: - IP addresses and ports of Graph and Meta services. - - The user name and password with Nebula Graph write permission. + - The user name and password with NebulaGraph write permission. - Exchange has been [compiled](../ex-ug-compile.md), or [download](https://repo1.maven.org/maven2/com/vesoft/nebula-exchange/) the compiled `.jar` file directly. - Spark has been installed. -- Learn about the Schema created in Nebula Graph, including names and properties of Tags and Edge types, and more. 
+- Learn about the Schema created in NebulaGraph, including names and properties of Tags and Edge types, and more. ## Steps -### Step 1: Create the Schema in Nebula Graph +### Step 1: Create the Schema in NebulaGraph -Analyze the data to create a Schema in Nebula Graph by following these steps: +Analyze the data to create a Schema in NebulaGraph by following these steps: -1. Identify the Schema elements. The Schema elements in the Nebula Graph are shown in the following table. +1. Identify the Schema elements. The Schema elements in NebulaGraph are shown in the following table. | Element | Name | Property | | :--- | :--- | :--- | @@ -74,7 +74,7 @@ Analyze the data to create a Schema in Nebula Graph by following these steps: | Edge Type | `follow` | `degree int` | | Edge Type | `serve` | `start_year int, end_year int` | -2. Create a graph space **basketballplayer** in the Nebula Graph and create a Schema as shown below. +2. Create a graph space **basketballplayer** in NebulaGraph and create a Schema as shown below. ```ngql ## Create a graph space @@ -132,7 +132,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` } - # Nebula Graph configuration + # NebulaGraph configuration nebula: { address:{ graph:["127.0.0.1:9669"] @@ -278,15 +278,15 @@ Exchange needs to execute different `SKIP` and `LIMIT` Cypher statements on diff #### tags.vertex or edges.vertex configuration -Nebula Graph uses ID as the unique primary key when creating vertexes and edges, overwriting the data in that primary key if it already exists. So, if a Neo4j property value is given as the Nebula Graph'S ID and the value is duplicated in Neo4j, duplicate IDs will be generated. One and only one of their corresponding data will be stored in the Nebula Graph, and the others will be overwritten. Because the data import process is concurrently writing data to Nebula Graph, the final saved data is not guaranteed to be the latest data in Neo4j. 
+NebulaGraph uses the ID as the unique primary key when creating vertexes and edges, overwriting the data under that primary key if it already exists. So, if a Neo4j property value is used as NebulaGraph's ID and the value is duplicated in Neo4j, duplicate IDs will be generated. Only one of the corresponding records will be stored in NebulaGraph, and the others will be overwritten. Because the data import process writes to NebulaGraph concurrently, the final saved data is not guaranteed to be the latest data in Neo4j. #### check_point_path configuration If breakpoint transfers are enabled, to avoid data loss, the state of the database should not change between the breakpoint and the transfer. For example, data cannot be added or deleted, and the `partition` quantity configuration should not be changed. -### Step 4: Import data into Nebula Graph +### Step 4: Import data into NebulaGraph -Run the following command to import Neo4j data into Nebula Graph. For a description of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md). +Run the following command to import Neo4j data into NebulaGraph. For a description of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md). ```bash ${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange -c @@ -306,7 +306,7 @@ You can search for `batchSuccess.` in the command output to ### Step 5: (optional) Validate data -Users can verify that data has been imported by executing a query in the Nebula Graph client (for example, Nebula Studio). For example: +Users can verify that data has been imported by executing a query in the NebulaGraph client (for example, Nebula Studio). 
For example: ```ngql GO FROM "player100" OVER follow; @@ -314,6 +314,6 @@ GO FROM "player100" OVER follow; Users can also run the [`SHOW STATS`](../../3.ngql-guide/7.general-query-statements/6.show/14.show-stats.md) command to view statistics. -### Step 6: (optional) Rebuild indexes in Nebula Graph +### Step 6: (optional) Rebuild indexes in NebulaGraph -With the data imported, users can recreate and rebuild indexes in Nebula Graph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md). +With the data imported, users can recreate and rebuild indexes in NebulaGraph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md). diff --git a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-orc.md b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-orc.md index 604e9db63f7..641086bd411 100644 --- a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-orc.md +++ b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-orc.md @@ -1,8 +1,8 @@ # Import data from ORC files -This topic provides an example of how to use Exchange to import Nebula Graph data stored in HDFS or local ORC files. +This topic provides an example of how to use Exchange to import NebulaGraph data stored in HDFS or local ORC files. -To import a local ORC file to Nebula Graph, see [Nebula Importer](https://github.com/vesoft-inc/nebula-importer "Click to go to GitHub"). +To import a local ORC file to NebulaGraph, see [Nebula Importer](https://github.com/vesoft-inc/nebula-importer "Click to go to GitHub"). ## Data set @@ -20,35 +20,35 @@ This example is done on MacOS. Here is the environment configuration information - Hadoop: 2.9.2, pseudo-distributed deployment -- Nebula Graph: {{nebula.release}}. [Deploy Nebula Graph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md). +- NebulaGraph: {{nebula.release}}. 
[Deploy NebulaGraph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md). ## Prerequisites Before importing data, you need to confirm the following information: -- Nebula Graph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information: +- NebulaGraph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information: - IP addresses and ports of Graph and Meta services. - - The user name and password with write permission to Nebula Graph. + - The user name and password with write permission to NebulaGraph. - Exchange has been [compiled](../ex-ug-compile.md), or [download](https://repo1.maven.org/maven2/com/vesoft/nebula-exchange/) the compiled `.jar` file directly. - Spark has been installed. -- Learn about the Schema created in Nebula Graph, including names and properties of Tags and Edge types, and more. +- Learn about the Schema created in NebulaGraph, including names and properties of Tags and Edge types, and more. - If files are stored in HDFS, ensure that the Hadoop service is running properly. -- If files are stored locally and Nebula Graph is a cluster architecture, you need to place the files in the same directory locally on each machine in the cluster. +- If files are stored locally and NebulaGraph is a cluster architecture, you need to place the files in the same directory locally on each machine in the cluster. ## Steps -### Step 1: Create the Schema in Nebula Graph +### Step 1: Create the Schema in NebulaGraph -Analyze the data to create a Schema in Nebula Graph by following these steps: +Analyze the data to create a Schema in NebulaGraph by following these steps: -1. Identify the Schema elements. 
The Schema elements in the Nebula Graph are shown in the following table. +1. Identify the Schema elements. The Schema elements in NebulaGraph are shown in the following table. | Element | Name | Property | | :--- | :--- | :--- | @@ -57,7 +57,7 @@ Analyze the data to create a Schema in Nebula Graph by following these steps: | Edge Type | `follow` | `degree int` | | Edge Type | `serve` | `start_year int, end_year int` | -2. Create a graph space **basketballplayer** in the Nebula Graph and create a Schema as shown below. +2. Create a graph space **basketballplayer** in NebulaGraph and create a Schema as shown below. ```ngql ## Create a graph space. @@ -116,7 +116,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` } } - # Nebula Graph configuration + # NebulaGraph configuration nebula: { address:{ # Specify the IP addresses and ports for Graph and all Meta services. @@ -126,11 +126,11 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` meta:["127.0.0.1:9559"] } - # The account entered must have write permission for the Nebula Graph space. + # The account entered must have write permission for the NebulaGraph space. user: root pswd: nebula - # Fill in the name of the graph space you want to write data to in the Nebula Graph. + # Fill in the name of the graph space you want to write data to in the NebulaGraph. space: basketballplayer connection: { timeout: 3000 @@ -158,7 +158,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # Specify the data source file format to ORC. source: orc - # Specify how to import the data into Nebula Graph: Client or SST. + # Specify how to import the data into NebulaGraph: Client or SST. sink: client } @@ -167,22 +167,22 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the file is stored locally, use double quotation marks to enclose the file path, starting with file://. 
For example, "file:///tmp/xx.orc". path: "hdfs://192.168.*.*:9000/data/vertex_player.orc" - # Specify the key name in the ORC file in fields, and its corresponding value will serve as the data source for the properties specified in the Nebula Graph. + # Specify the key name in the ORC file in fields, and its corresponding value will serve as the data source for the properties specified in the NebulaGraph. # If multiple values need to be specified, separate them with commas. fields: [age,name] - # Specify the property names defined in Nebula Graph. + # Specify the property names defined in NebulaGraph. # The sequence of fields and nebula.fields must correspond to each other. nebula.fields: [age, name] - # Specify a column of data in the table as the source of VIDs in the Nebula Graph. + # Specify a column of data in the table as the source of VIDs in the NebulaGraph. # The value of vertex must be consistent with the field in the ORC file. - # Currently, Nebula Graph {{nebula.release}} supports only strings or integers of VID. + # Currently, NebulaGraph {{nebula.release}} supports only strings or integers of VID. vertex: { field:id } - # The number of data written to Nebula Graph in a single batch. + # The number of data written to NebulaGraph in a single batch. batch: 256 # The number of Spark partitions. @@ -191,13 +191,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # Set the information about the Tag team. { - # Specify the Tag name defined in Nebula Graph. + # Specify the Tag name defined in NebulaGraph. name: team type: { # Specify the data source file format to ORC. source: orc - # Specify how to import the data into Nebula Graph: Client or SST. + # Specify how to import the data into NebulaGraph: Client or SST. 
sink: client } @@ -206,23 +206,23 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the file is stored locally, use double quotation marks to enclose the file path, starting with file://. For example, "file:///tmp/xx.orc". path: "hdfs://192.168.*.*:9000/data/vertex_team.orc" - # Specify the key name in the ORC file in fields, and its corresponding value will serve as the data source for the properties specified in the Nebula Graph. + # Specify the key name in the ORC file in fields, and its corresponding value will serve as the data source for the properties specified in the NebulaGraph. # If multiple values need to be specified, separate them with commas. fields: [name] - # Specify the property names defined in Nebula Graph. + # Specify the property names defined in NebulaGraph. # The sequence of fields and nebula.fields must correspond to each other. nebula.fields: [name] - # Specify a column of data in the table as the source of VIDs in the Nebula Graph. + # Specify a column of data in the table as the source of VIDs in the NebulaGraph. # The value of vertex must be consistent with the field in the ORC file. - # Currently, Nebula Graph {{nebula.release}} supports only strings or integers of VID. + # Currently, NebulaGraph {{nebula.release}} supports only strings or integers of VID. vertex: { field:id } - # The number of data written to Nebula Graph in a single batch. + # The number of data written to NebulaGraph in a single batch. batch: 256 # The number of Spark partitions. @@ -237,13 +237,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` edges: [ # Set the information about the Edge Type follow. { - # Specify the Edge Type name defined in Nebula Graph. + # Specify the Edge Type name defined in NebulaGraph. name: follow type: { # Specify the data source file format to ORC. source: orc - # Specify how to import the data into Nebula Graph: Client or SST. 
+ # Specify how to import the data into NebulaGraph: Client or SST. sink: client } @@ -252,17 +252,17 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the file is stored locally, use double quotation marks to enclose the file path, starting with file://. For example, "file:///tmp/xx.orc". path: "hdfs://192.168.*.*:9000/data/edge_follow.orc" - # Specify the key name in the ORC file in fields, and its corresponding value will serve as the data source for the properties specified in the Nebula Graph. + # Specify the key name in the ORC file in fields, and its corresponding value will serve as the data source for the properties specified in the NebulaGraph. # If multiple values need to be specified, separate them with commas. fields: [degree] - # Specify the property names defined in Nebula Graph. + # Specify the property names defined in NebulaGraph. # The sequence of fields and nebula.fields must correspond to each other. nebula.fields: [degree] # Specify a column as the source for the source and destination vertexes. # The value of vertex must be consistent with the field in the ORC file. - # Currently, Nebula Graph {{nebula.release}} supports only strings or integers of VID. + # Currently, NebulaGraph {{nebula.release}} supports only strings or integers of VID. source: { field: src } @@ -273,7 +273,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # (Optional) Specify a column as the source of the rank. #ranking: rank - # The number of data written to Nebula Graph in a single batch. + # The number of data written to NebulaGraph in a single batch. batch: 256 # The number of Spark partitions. @@ -282,13 +282,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # Set the information about the Edge type serve. { - # Specify the Edge type name defined in Nebula Graph. + # Specify the Edge type name defined in NebulaGraph. 
name: serve type: { # Specify the data source file format to ORC. source: orc - # Specify how to import the data into Nebula Graph: Client or SST. + # Specify how to import the data into NebulaGraph: Client or SST. sink: client } @@ -297,17 +297,17 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the file is stored locally, use double quotation marks to enclose the file path, starting with file://. For example, "file:///tmp/xx.orc". path: "hdfs://192.168.*.*:9000/data/edge_serve.orc" - # Specify the key name in the ORC file in fields, and its corresponding value will serve as the data source for the properties specified in the Nebula Graph. + # Specify the key name in the ORC file in fields, and its corresponding value will serve as the data source for the properties specified in the NebulaGraph. # If multiple values need to be specified, separate them with commas. fields: [start_year,end_year] - # Specify the property names defined in Nebula Graph. + # Specify the property names defined in NebulaGraph. # The sequence of fields and nebula.fields must correspond to each other. nebula.fields: [start_year, end_year] # Specify a column as the source for the source and destination vertexes. # The value of vertex must be consistent with the field in the ORC file. - # Currently, Nebula Graph {{nebula.release}} supports only strings or integers of VID. + # Currently, NebulaGraph {{nebula.release}} supports only strings or integers of VID. source: { field: src } @@ -318,7 +318,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # (Optional) Specify a column as the source of the rank. #ranking: _c5 - # The number of data written to Nebula Graph in a single batch. + # The number of data written to NebulaGraph in a single batch. batch: 256 # The number of Spark partitions. 
@@ -329,9 +329,9 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` } ``` -### Step 4: Import data into Nebula Graph +### Step 4: Import data into NebulaGraph -Run the following command to import ORC data into Nebula Graph. For a description of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md). +Run the following command to import ORC data into NebulaGraph. For a description of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md). ```bash ${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange -c @@ -351,7 +351,7 @@ You can search for `batchSuccess.` in the command output to ### Step 5: (optional) Validate data -Users can verify that data has been imported by executing a query in the Nebula Graph client (for example, Nebula Studio). For example: +Users can verify that data has been imported by executing a query in the NebulaGraph client (for example, Nebula Studio). For example: ```ngql GO FROM "player100" OVER follow; @@ -359,6 +359,6 @@ GO FROM "player100" OVER follow; Users can also run the [`SHOW STATS`](../../3.ngql-guide/7.general-query-statements/6.show/14.show-stats.md) command to view statistics. -### Step 6: (optional) Rebuild indexes in Nebula Graph +### Step 6: (optional) Rebuild indexes in NebulaGraph -With the data imported, users can recreate and rebuild indexes in Nebula Graph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md). +With the data imported, users can recreate and rebuild indexes in NebulaGraph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md). 
diff --git a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-parquet.md b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-parquet.md index ee08ac9bb0b..07ae2b31d98 100644 --- a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-parquet.md +++ b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-parquet.md @@ -1,8 +1,8 @@ # Import data from Parquet files -This topic provides an example of how to use Exchange to import Nebula Graph data stored in HDFS or local Parquet files. +This topic provides an example of how to use Exchange to import NebulaGraph data stored in HDFS or local Parquet files. -To import a local Parquet file to Nebula Graph, see [Nebula Importer](https://github.com/vesoft-inc/nebula-importer "Click to go to GitHub"). +To import a local Parquet file to NebulaGraph, see [Nebula Importer](https://github.com/vesoft-inc/nebula-importer "Click to go to GitHub"). ## Data set @@ -20,35 +20,35 @@ This example is done on MacOS. Here is the environment configuration information - Hadoop: 2.9.2, pseudo-distributed deployment -- Nebula Graph: {{nebula.release}}. [Deploy Nebula Graph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md). +- NebulaGraph: {{nebula.release}}. [Deploy NebulaGraph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md). ## Prerequisites Before importing data, you need to confirm the following information: -- Nebula Graph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information: +- NebulaGraph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information: - IP addresses and ports of Graph and Meta services. 
- - The user name and password with write permission to Nebula Graph. + - The user name and password with write permission to NebulaGraph. - Exchange has been [compiled](../ex-ug-compile.md), or [download](https://repo1.maven.org/maven2/com/vesoft/nebula-exchange/) the compiled `.jar` file directly. - Spark has been installed. -- Learn about the Schema created in Nebula Graph, including names and properties of Tags and Edge types, and more. +- Learn about the Schema created in NebulaGraph, including names and properties of Tags and Edge types, and more. - If files are stored in HDFS, ensure that the Hadoop service is running properly. -- If files are stored locally and Nebula Graph is a cluster architecture, you need to place the files in the same directory locally on each machine in the cluster. +- If files are stored locally and NebulaGraph is a cluster architecture, you need to place the files in the same directory locally on each machine in the cluster. ## Steps -### Step 1: Create the Schema in Nebula Graph +### Step 1: Create the Schema in NebulaGraph -Analyze the data to create a Schema in Nebula Graph by following these steps: +Analyze the data to create a Schema in NebulaGraph by following these steps: -1. Identify the Schema elements. The Schema elements in the Nebula Graph are shown in the following table. +1. Identify the Schema elements. The Schema elements in NebulaGraph are shown in the following table. | Element | Name | Property | | :--- | :--- | :--- | @@ -57,7 +57,7 @@ Analyze the data to create a Schema in Nebula Graph by following these steps: | Edge Type | `follow` | `degree int` | | Edge Type | `serve` | `start_year int, end_year int` | -2. Create a graph space **basketballplayer** in the Nebula Graph and create a Schema as shown below. +2. Create a graph space **basketballplayer** in NebulaGraph and create a Schema as shown below. ```ngql ## Create a graph space. 
@@ -116,7 +116,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` } } - # Nebula Graph configuration + # NebulaGraph configuration nebula: { address:{ # Specify the IP addresses and ports for Graph and all Meta services. @@ -126,11 +126,11 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` meta:["127.0.0.1:9559"] } - # The account entered must have write permission for the Nebula Graph space. + # The account entered must have write permission for the NebulaGraph space. user: root pswd: nebula - # Fill in the name of the graph space you want to write data to in the Nebula Graph. + # Fill in the name of the graph space you want to write data to in the NebulaGraph. space: basketballplayer connection: { timeout: 3000 @@ -153,13 +153,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` tags: [ # Set the information about the Tag player. { - # Specify the Tag name defined in Nebula Graph. + # Specify the Tag name defined in NebulaGraph. name: player type: { # Specify the data source file format to Parquet. source: parquet - # Specifies how to import the data into Nebula Graph: Client or SST. + # Specifies how to import the data into NebulaGraph: Client or SST. sink: client } @@ -168,22 +168,22 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the file is stored locally, use double quotation marks to enclose the file path, starting with file://. For example, "file:///tmp/xx.parquet". path: "hdfs://192.168.*.13:9000/data/vertex_player.parquet" - # Specify the key name in the Parquet file in fields, and its corresponding value will serve as the data source for the properties specified in the Nebula Graph. + # Specify the key name in the Parquet file in fields, and its corresponding value will serve as the data source for the properties specified in the NebulaGraph. # If multiple values need to be specified, separate them with commas. 
fields: [age,name] - # Specify the property name defined in Nebula Graph. + # Specify the property name defined in NebulaGraph. # The sequence of fields and nebula.fields must correspond to each other. nebula.fields: [age, name] - # Specify a column of data in the table as the source of VIDs in the Nebula Graph. + # Specify a column of data in the table as the source of VIDs in the NebulaGraph. # The value of vertex must be consistent with the field in the Parquet file. - # Currently, Nebula Graph {{nebula.release}} supports only strings or integers of VID. + # Currently, NebulaGraph {{nebula.release}} supports only strings or integers of VID. vertex: { field:id } - # The number of data written to Nebula Graph in a single batch. + # The number of data written to NebulaGraph in a single batch. batch: 256 # The number of Spark partitions. @@ -192,13 +192,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # Set the information about the Tag team. { - # Specify the Tag name defined in Nebula Graph. + # Specify the Tag name defined in NebulaGraph. name: team type: { # Specify the data source file format to Parquet. source: parquet - # Specifies how to import the data into Nebula Graph: Client or SST. + # Specifies how to import the data into NebulaGraph: Client or SST. sink: client } @@ -207,23 +207,23 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the file is stored locally, use double quotation marks to enclose the file path, starting with file://. For example, "file:///tmp/xx.parquet". path: "hdfs://192.168.11.13:9000/data/vertex_team.parquet" - # Specify the key name in the Parquet file in fields, and its corresponding value will serve as the data source for the properties specified in the Nebula Graph. + # Specify the key name in the Parquet file in fields, and its corresponding value will serve as the data source for the properties specified in the NebulaGraph. 
# If multiple values need to be specified, separate them with commas. fields: [name] - # Specify the property name defined in Nebula Graph. + # Specify the property name defined in NebulaGraph. # The sequence of fields and nebula.fields must correspond to each other. nebula.fields: [name] - # Specify a column of data in the table as the source of VIDs in the Nebula Graph. + # Specify a column of data in the table as the source of VIDs in the NebulaGraph. # The value of vertex must be consistent with the field in the Parquet file. - # Currently, Nebula Graph {{nebula.release}} supports only strings or integers of VID. + # Currently, NebulaGraph {{nebula.release}} supports only strings or integers of VID. vertex: { field:id } - # The number of data written to Nebula Graph in a single batch. + # The number of data written to NebulaGraph in a single batch. batch: 256 # The number of Spark partitions. @@ -237,13 +237,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` edges: [ # Set the information about the Edge Type follow. { - # Specify the Edge Type name defined in Nebula Graph. + # Specify the Edge Type name defined in NebulaGraph. name: follow type: { # Specify the data source file format to Parquet. source: parquet - # Specifies how to import the data into Nebula Graph: Client or SST. + # Specifies how to import the data into NebulaGraph: Client or SST. sink: client } @@ -252,17 +252,17 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the file is stored locally, use double quotation marks to enclose the file path, starting with file://. For example, "file:///tmp/xx.parquet". path: "hdfs://192.168.11.13:9000/data/edge_follow.parquet" - # Specify the key name in the Parquet file in fields, and its corresponding value will serve as the data source for the properties specified in the Nebula Graph. 
+ # Specify the key name in the Parquet file in fields, and its corresponding value will serve as the data source for the properties specified in the NebulaGraph. # If multiple values need to be specified, separate them with commas. fields: [degree] - # Specify the property name defined in Nebula Graph. + # Specify the property name defined in NebulaGraph. # The sequence of fields and nebula.fields must correspond to each other. nebula.fields: [degree] # Specify a column as the source for the source and destination vertexes. # The values of vertex must be consistent with the fields in the Parquet file. - # Currently, Nebula Graph {{nebula.release}} supports only strings or integers of VID. + # Currently, NebulaGraph {{nebula.release}} supports only strings or integers of VID. source: { field: src } @@ -273,7 +273,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # (Optional) Specify a column as the source of the rank. #ranking: rank - # The number of data written to Nebula Graph in a single batch. + # The number of data written to NebulaGraph in a single batch. batch: 256 # The number of Spark partitions. @@ -282,13 +282,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # Set the information about the Edge type serve. { - # Specify the Edge type name defined in Nebula Graph. + # Specify the Edge type name defined in NebulaGraph. name: serve type: { # Specify the data source file format to Parquet. source: parquet - # Specifies how to import the data into Nebula Graph: Client or SST. + # Specifies how to import the data into NebulaGraph: Client or SST. sink: client } @@ -297,17 +297,17 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the file is stored locally, use double quotation marks to enclose the file path, starting with file://. For example, "file:///tmp/xx.parquet". 
path: "hdfs://192.168.11.13:9000/data/edge_serve.parquet" - # Specify the key name in the Parquet file in fields, and its corresponding value will serve as the data source for the properties specified in the Nebula Graph. + # Specify the key name in the Parquet file in fields, and its corresponding value will serve as the data source for the properties specified in the NebulaGraph. # If multiple values need to be specified, separate them with commas. fields: [start_year,end_year] - # Specify the property name defined in Nebula Graph. + # Specify the property name defined in NebulaGraph. # The sequence of fields and nebula.fields must correspond to each other. nebula.fields: [start_year, end_year] # Specify a column as the source for the source and destination vertexes. # The values of vertex must be consistent with the fields in the Parquet file. - # Currently, Nebula Graph {{nebula.release}} supports only strings or integers of VID. + # Currently, NebulaGraph {{nebula.release}} supports only strings or integers of VID. source: { field: src } @@ -318,7 +318,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # (Optional) Specify a column as the source of the rank. #ranking: _c5 - # The number of data written to Nebula Graph in a single batch. + # The number of data written to NebulaGraph in a single batch. batch: 256 # The number of Spark partitions. @@ -330,9 +330,9 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` } ``` -### Step 4: Import data into Nebula Graph +### Step 4: Import data into NebulaGraph -Run the following command to import Parquet data into Nebula Graph. For a description of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md). +Run the following command to import Parquet data into NebulaGraph. For a description of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md). 
```bash ${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange -c @@ -352,7 +352,7 @@ You can search for `batchSuccess.` in the command output to ### Step 5: (optional) Validate data -Users can verify that data has been imported by executing a query in the Nebula Graph client (for example, Nebula Studio). For example: +Users can verify that data has been imported by executing a query in the NebulaGraph client (for example, Nebula Studio). For example: ```ngql GO FROM "player100" OVER follow; @@ -360,6 +360,6 @@ GO FROM "player100" OVER follow; Users can also run the [`SHOW STATS`](../../3.ngql-guide/7.general-query-statements/6.show/14.show-stats.md) command to view statistics. -### Step 6: (optional) Rebuild indexes in Nebula Graph +### Step 6: (optional) Rebuild indexes in NebulaGraph -With the data imported, users can recreate and rebuild indexes in Nebula Graph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md). +With the data imported, users can recreate and rebuild indexes in NebulaGraph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md). diff --git a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-pulsar.md b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-pulsar.md index dea6f1dee0e..18e7fe3c86b 100644 --- a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-pulsar.md +++ b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-pulsar.md @@ -1,6 +1,6 @@ # Import data from Pulsar -This topic provides an example of how to use Exchange to import Nebula Graph data stored in Pulsar. +This topic provides an example of how to use Exchange to import NebulaGraph data stored in Pulsar. ## Environment @@ -12,33 +12,33 @@ This example is done on MacOS. Here is the environment configuration information - Spark: 2.4.7, stand-alone -- Nebula Graph: {{nebula.release}}. 
[Deploy Nebula Graph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md). +- NebulaGraph: {{nebula.release}}. [Deploy NebulaGraph with Docker Compose](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md). ## Prerequisites Before importing data, you need to confirm the following information: -- Nebula Graph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information: +- NebulaGraph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information: - IP addresses and ports of Graph and Meta services. - - The user name and password with write permission to Nebula Graph. + - The user name and password with write permission to NebulaGraph. - Exchange has been [compiled](../ex-ug-compile.md), or [download](https://repo1.maven.org/maven2/com/vesoft/nebula-exchange/) the compiled `.jar` file directly. - Spark has been installed. -- Learn about the Schema created in Nebula Graph, including names and properties of Tags and Edge types, and more. +- Learn about the Schema created in NebulaGraph, including names and properties of Tags and Edge types, and more. - The Pulsar service has been installed and started. ## Steps -### Step 1: Create the Schema in Nebula Graph +### Step 1: Create the Schema in NebulaGraph -Analyze the data to create a Schema in Nebula Graph by following these steps: +Analyze the data to create a Schema in NebulaGraph by following these steps: -1. Identify the Schema elements. The Schema elements in the Nebula Graph are shown in the following table. +1. Identify the Schema elements. The Schema elements in NebulaGraph are shown in the following table. 
| Element | Name | Property | | :--- | :--- | :--- | @@ -47,7 +47,7 @@ Analyze the data to create a Schema in Nebula Graph by following these steps: | Edge Type | `follow` | `degree int` | | Edge Type | `serve` | `start_year int, end_year int` | -2. Create a graph space **basketballplayer** in the Nebula Graph and create a Schema as shown below. +2. Create a graph space **basketballplayer** in the NebulaGraph and create a Schema as shown below. ```ngql ## Create a graph space @@ -95,7 +95,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` } - # Nebula Graph configuration + # NebulaGraph configuration nebula: { address:{ # Specify the IP addresses and ports for Graph and all Meta services. @@ -105,11 +105,11 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` meta:["127.0.0.1:9559"] } - # The account entered must have write permission for the Nebula Graph space. + # The account entered must have write permission for the NebulaGraph space. user: root pswd: nebula - # Fill in the name of the graph space you want to write data to in the Nebula Graph. + # Fill in the name of the graph space you want to write data to in the NebulaGraph. space: basketballplayer connection: { timeout: 3000 @@ -131,12 +131,12 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` tags: [ # Set the information about the Tag player. { - # The corresponding Tag name in Nebula Graph. + # The corresponding Tag name in NebulaGraph. name: player type: { # Specify the data source file format to Pulsar. source: pulsar - # Specify how to import the data into Nebula Graph: Client or SST. + # Specify how to import the data into NebulaGraph: Client or SST. sink: client } # The address of the Pulsar server. 
@@ -148,19 +148,19 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` topics: "topic1,topic2" } - # Specify the column names in the player table in fields, and their corresponding values are specified as properties in the Nebula Graph. + # Specify the column names in the player table in fields, and their corresponding values are specified as properties in the NebulaGraph. # The sequence of fields and nebula.fields must correspond to each other. # If multiple column names need to be specified, separate them by commas. fields: [age,name] nebula.fields: [age,name] - # Specify a column of data in the table as the source of VIDs in the Nebula Graph. + # Specify a column of data in the table as the source of VIDs in the NebulaGraph. vertex:{ field:playerid } - # The number of data written to Nebula Graph in a single batch. + # The number of data written to NebulaGraph in a single batch. batch: 10 # The number of Spark partitions. @@ -196,15 +196,15 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` edges: [ # Set the information about Edge Type follow { - # The corresponding Edge Type name in Nebula Graph. + # The corresponding Edge Type name in NebulaGraph. name: follow type: { # Specify the data source file format to Pulsar. source: pulsar - # Specify how to import the Edge type data into Nebula Graph. - # Specify how to import the data into Nebula Graph: Client or SST. + # Specify how to import the Edge type data into NebulaGraph. + # Specify how to import the data into NebulaGraph: Client or SST. sink: client } @@ -217,7 +217,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` topics: "topic1,topic2" } - # Specify the column names in the follow table in fields, and their corresponding values are specified as properties in the Nebula Graph. 
+ # Specify the column names in the follow table in fields, and their corresponding values are specified as properties in the NebulaGraph. # The sequence of fields and nebula.fields must correspond to each other. # If multiple column names need to be specified, separate them by commas. fields: [degree] @@ -236,7 +236,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # (Optional) Specify a column as the source of the rank. #ranking: rank - # The number of data written to Nebula Graph in a single batch. + # The number of data written to NebulaGraph in a single batch. batch: 10 # The number of Spark partitions. @@ -280,9 +280,9 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` } ``` -### Step 3: Import data into Nebula Graph +### Step 3: Import data into NebulaGraph -Run the following command to import Pulsar data into Nebula Graph. For a description of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md). +Run the following command to import Pulsar data into NebulaGraph. For a description of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md). ```bash ${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange -c @@ -302,7 +302,7 @@ You can search for `batchSuccess.` in the command output to ### Step 4: (optional) Validate data -Users can verify that data has been imported by executing a query in the Nebula Graph client (for example, Nebula Studio). For example: +Users can verify that data has been imported by executing a query in the NebulaGraph client (for example, Nebula Studio). For example: ```ngql GO FROM "player100" OVER follow; @@ -310,6 +310,6 @@ GO FROM "player100" OVER follow; Users can also run the [`SHOW STATS`](../../3.ngql-guide/7.general-query-statements/6.show/14.show-stats.md) command to view statistics. 
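The `SHOW STATS` check in the validation step above only returns data after a statistics job has completed; a minimal sketch in nGQL (assuming the basketballplayer space from this example):

```ngql
# Statistics are collected by a job, not computed on the fly.
USE basketballplayer;
SUBMIT JOB STATS;
# Wait for the job to finish (check with SHOW JOBS), then:
SHOW STATS;
```

If the STATS job has not finished, `SHOW STATS` may return empty or stale counts, so a missing result does not by itself mean the import failed.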
-### Step 5: (optional) Rebuild indexes in Nebula Graph +### Step 5: (optional) Rebuild indexes in NebulaGraph -With the data imported, users can recreate and rebuild indexes in Nebula Graph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md). +With the data imported, users can recreate and rebuild indexes in NebulaGraph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md). diff --git a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-sst.md b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-sst.md index bcac6283226..dea0b3a2f98 100644 --- a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-sst.md +++ b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-sst.md @@ -1,6 +1,6 @@ # Import data from SST files -This topic provides an example of how to generate the data from the data source into an SST (Sorted String Table) file and save it on HDFS, and then import it into Nebula Graph. The sample data source is a CSV file. +This topic provides an example of how to generate the data from the data source into an SST (Sorted String Table) file and save it on HDFS, and then import it into NebulaGraph. The sample data source is a CSV file. ## Precautions @@ -12,9 +12,9 @@ This topic provides an example of how to generate the data from the data source Exchange supports two data import modes: -- Import the data from the data source directly into Nebula Graph as **nGQL** statements. +- Import the data from the data source directly into NebulaGraph as **nGQL** statements. -- Generate the SST file from the data source, and use Console to import the SST file into Nebula Graph. +- Generate the SST file from the data source, and use Console to import the SST file into NebulaGraph. The following describes the scenarios, implementation methods, prerequisites, and steps for generating an SST file and importing data. 
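In the first (client) mode described above, Exchange writes each source row to the Graph service as nGQL statements; a sketch of the equivalent statements for one row of the sample basketballplayer data (the values are illustrative):

```ngql
# One player vertex and one follow edge, as client mode would write them.
INSERT VERTEX player(age, name) VALUES "player100":(42, "Tim Duncan");
INSERT EDGE follow(degree) VALUES "player100" -> "player101":(95);
```

SST mode skips these per-row statements entirely and hands finished key-value files to the Storage service, which is why it is the faster option for bulk loads.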
@@ -30,17 +30,17 @@ The following describes the scenarios, implementation methods, prerequisites, an ## Implementation methods -The underlying code in Nebula Graph uses RocksDB as the key-value storage engine. RocksDB is a storage engine based on the hard disk, providing a series of APIs for creating and importing SST files to help quickly import massive data. +The underlying code in NebulaGraph uses RocksDB as the key-value storage engine. RocksDB is a storage engine based on the hard disk, providing a series of APIs for creating and importing SST files to help quickly import massive data. The SST file is an internal file containing an arbitrarily long set of ordered key-value pairs for efficient storage of large amounts of key-value data. The entire process of generating SST files is mainly done by Exchange Reader, sstProcessor, and sstWriter. The whole data processing steps are as follows: 1. Reader reads data from the data source. -2. sstProcessor generates the SST file from the Nebula Graph's Schema information and uploads it to the HDFS. For details about the format of the SST file, see [Data Storage Format](../../1.introduction/3.nebula-graph-architecture/4.storage-service.md). +2. sstProcessor generates the SST file from the NebulaGraph's Schema information and uploads it to the HDFS. For details about the format of the SST file, see [Data Storage Format](../../1.introduction/3.nebula-graph-architecture/4.storage-service.md). 3. sstWriter opens a file and inserts data. When generating SST files, keys must be written in sequence. -4. After the SST file is generated, RocksDB imports the SST file into Nebula Graph using the `IngestExternalFile()` method. For example: +4. After the SST file is generated, RocksDB imports the SST file into NebulaGraph using the `IngestExternalFile()` method. For example: ``` IngestExternalFileOptions ifo; @@ -71,17 +71,17 @@ This example is done on MacOS. 
Here is the environment configuration information - Hadoop: 2.9.2, pseudo-distributed deployment -- Nebula Graph: {{nebula.release}}. +- NebulaGraph: {{nebula.release}}. ## Prerequisites Before importing data, you need to confirm the following information: -- Nebula Graph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information: +- NebulaGraph has been [installed](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) and deployed with the following information: - IP addresses and ports of Graph and Meta services. - - The user name and password with write permission to Nebula Graph. + - The user name and password with write permission to NebulaGraph. - `--ws_storage_http_port` in the Meta service configuration file is the same as `--ws_http_port` in the Storage service configuration file. For example, `19779`. @@ -107,11 +107,11 @@ Before importing data, you need to confirm the following information: ## Steps -### Step 1: Create the Schema in Nebula Graph +### Step 1: Create the Schema in NebulaGraph -Analyze the data to create a Schema in Nebula Graph by following these steps: +Analyze the data to create a Schema in NebulaGraph by following these steps: -1. Identify the Schema elements. The Schema elements in the Nebula Graph are shown in the following table. +1. Identify the Schema elements. The Schema elements in NebulaGraph are shown in the following table. | Element | Name | Property | | :--- | :--- | :--- | @@ -120,7 +120,7 @@ Analyze the data to create a Schema in Nebula Graph by following these steps: | Edge Type | `follow` | `degree int` | | Edge Type | `serve` | `start_year int, end_year int` | -2. Create a graph space **basketballplayer** in the Nebula Graph and create a Schema as shown below. +2. 
Create a graph space **basketballplayer** in the NebulaGraph and create a Schema as shown below. ```ngql ## Create a graph space @@ -187,7 +187,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` } } - # Nebula Graph configuration + # NebulaGraph configuration nebula: { address:{ graph:["127.0.0.1:9669"] @@ -237,13 +237,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` tags: [ # Set the information about the Tag player. { - # Specify the Tag name defined in Nebula Graph. + # Specify the Tag name defined in NebulaGraph. name: player type: { # Specify the data source file format to CSV. source: csv - # Specify how to import the data into Nebula Graph: Client or SST. + # Specify how to import the data into NebulaGraph: Client or SST. sink: sst } @@ -255,13 +255,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the CSV file has a header, use the actual column name. fields: [_c1, _c2] - # Specify the property name defined in Nebula Graph. + # Specify the property name defined in NebulaGraph. # The sequence of fields and nebula.fields must correspond to each other. nebula.fields: [age, name] - # Specify a column of data in the table as the source of VIDs in Nebula Graph. + # Specify a column of data in the table as the source of VIDs in NebulaGraph. # The value of vertex must be consistent with the column name in the above fields or csv.fields. - # Currently, Nebula Graph {{nebula.release}} supports only strings or integers of VID. + # Currently, NebulaGraph {{nebula.release}} supports only strings or integers of VID. vertex: { field:_c0 } @@ -273,25 +273,25 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the CSV file does not have a header, set the header to false. The default value is false. header: false - # The number of data written to Nebula Graph in a single batch. 
+ # The number of data written to NebulaGraph in a single batch. batch: 256 # The number of Spark partitions. partition: 32 - # Whether to repartition data based on the number of partitions of graph spaces in Nebula Graph when generating the SST file. + # Whether to repartition data based on the number of partitions of graph spaces in NebulaGraph when generating the SST file. repartitionWithNebula: false } # Set the information about the Tag Team. { - # Specify the Tag name defined in Nebula Graph. + # Specify the Tag name defined in NebulaGraph. name: team type: { # Specify the data source file format to CSV. source: csv - # Specify how to import the data into Nebula Graph: Client or SST. + # Specify how to import the data into NebulaGraph: Client or SST. sink: sst } @@ -303,13 +303,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the CSV file has a header, use the actual column name. fields: [_c1] - # Specify the property name defined in Nebula Graph. + # Specify the property name defined in NebulaGraph. # The sequence of fields and nebula.fields must correspond to each other. nebula.fields: [name] - # Specify a column of data in the table as the source of VIDs in Nebula Graph. + # Specify a column of data in the table as the source of VIDs in NebulaGraph. # The value of vertex must be consistent with the column name in the above fields or csv.fields. - # Currently, Nebula Graph {{nebula.release}} supports only strings or integers of VID. + # Currently, NebulaGraph {{nebula.release}} supports only strings or integers of VID. vertex: { field:_c0 } @@ -321,13 +321,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the CSV file does not have a header, set the header to false. The default value is false. header: false - # The number of data written to Nebula Graph in a single batch. + # The number of data written to NebulaGraph in a single batch. 
batch: 256 # The number of Spark partitions. partition: 32 - # Whether to repartition data based on the number of partitions of graph spaces in Nebula Graph when generating the SST file. + # Whether to repartition data based on the number of partitions of graph spaces in NebulaGraph when generating the SST file. repartitionWithNebula: false } @@ -339,13 +339,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` edges: [ # Set the information about the Edge Type follow. { - # The Edge Type name defined in Nebula Graph. + # The Edge Type name defined in NebulaGraph. name: follow type: { # Specify the data source file format to CSV. source: csv - # Specify how to import the data into Nebula Graph: Client or SST. + # Specify how to import the data into NebulaGraph: Client or SST. sink: sst } @@ -357,13 +357,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the CSV file has a header, use the actual column name. fields: [_c2] - # Specify the property name defined in Nebula Graph. + # Specify the property name defined in NebulaGraph. # The sequence of fields and nebula.fields must correspond to each other. nebula.fields: [degree] # Specify a column as the source for the source and destination vertices. # The value of vertex must be consistent with the column name in the above fields or csv.fields. - # Currently, Nebula Graph {{nebula.release}} supports only strings or integers of VID. + # Currently, NebulaGraph {{nebula.release}} supports only strings or integers of VID. source: { field: _c0 } @@ -382,25 +382,25 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the CSV file does not have a header, set the header to false. The default value is false. header: false - # The number of data written to Nebula Graph in a single batch. + # The number of data written to NebulaGraph in a single batch. batch: 256 # The number of Spark partitions. 
partition: 32 - # Whether to repartition data based on the number of partitions of graph spaces in Nebula Graph when generating the SST file. + # Whether to repartition data based on the number of partitions of graph spaces in NebulaGraph when generating the SST file. repartitionWithNebula: false } # Set the information about the Edge Type serve. { - # Specify the Edge type name defined in Nebula Graph. + # Specify the Edge type name defined in NebulaGraph. name: serve type: { # Specify the data source file format to CSV. source: csv - # Specify how to import the data into Nebula Graph: Client or SST. + # Specify how to import the data into NebulaGraph: Client or SST. sink: sst } @@ -412,13 +412,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the CSV file has a header, use the actual column name. fields: [_c2,_c3] - # Specify the property name defined in Nebula Graph. + # Specify the property name defined in NebulaGraph. # The sequence of fields and nebula.fields must correspond to each other. nebula.fields: [start_year, end_year] # Specify a column as the source for the source and destination vertices. # The value of vertex must be consistent with the column name in the above fields or csv.fields. - # Currently, Nebula Graph {{nebula.release}} supports only strings or integers of VID. + # Currently, NebulaGraph {{nebula.release}} supports only strings or integers of VID. source: { field: _c0 } @@ -436,13 +436,13 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf` # If the CSV file does not have a header, set the header to false. The default value is false. header: false - # The number of data written to Nebula Graph in a single batch. + # The number of data written to NebulaGraph in a single batch. batch: 256 # The number of Spark partitions. partition: 32 - # Whether to repartition data based on the number of partitions of graph spaces in Nebula Graph when generating the SST file. 
+ # Whether to repartition data based on the number of partitions of graph spaces in NebulaGraph when generating the SST file. repartitionWithNebula: false } @@ -491,7 +491,7 @@ After the task is complete, you can view the generated SST file in the `/sst` di - The `--ws_meta_http_port` in the Graph service configuration file (add it manually if it does not exist) is the same as the `--ws_http_port` in the Meta service configuration file. For example, both are `19559`. -Connect to the Nebula Graph database using the client tool and import the SST file as follows: +Connect to the NebulaGraph database using the client tool and import the SST file as follows: 1. Run the following command to select the graph space you created earlier. @@ -519,13 +519,13 @@ Connect to the Nebula Graph database using the client tool and import the SST fi !!! note - - To download the SST file again, delete the `download` folder in the space ID in the `data/storage/nebula` directory in the Nebula Graph installation path, and then download the SST file again. If the space has multiple copies, the `download` folder needs to be deleted on all machines where the copies are saved. + - To download the SST file again, delete the `download` folder in the space ID in the `data/storage/nebula` directory in the NebulaGraph installation path, and then download the SST file again. If the space has multiple copies, the `download` folder needs to be deleted on all machines where the copies are saved. - If there is a problem with the import and re-importing is required, re-execute `SUBMIT JOB INGEST;`. ### Step 6: (optional) Validate data -Users can verify that data has been imported by executing a query in the Nebula Graph client (for example, Nebula Studio). For example: +Users can verify that data has been imported by executing a query in the NebulaGraph client (for example, Nebula Studio). 
For example: ```ngql GO FROM "player100" OVER follow; @@ -533,6 +533,6 @@ GO FROM "player100" OVER follow; Users can also run the [`SHOW STATS`](../../3.ngql-guide/7.general-query-statements/6.show/14.show-stats.md) command to view statistics. -### Step 7: (optional) Rebuild indexes in Nebula Graph +### Step 7: (optional) Rebuild indexes in NebulaGraph -With the data imported, users can recreate and rebuild indexes in Nebula Graph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md). +With the data imported, users can recreate and rebuild indexes in NebulaGraph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md). diff --git a/docs-2.0/nebula-explorer/12.query-visually.md b/docs-2.0/nebula-explorer/12.query-visually.md index d2e55e71b9f..cbd5748d9e6 100644 --- a/docs-2.0/nebula-explorer/12.query-visually.md +++ b/docs-2.0/nebula-explorer/12.query-visually.md @@ -4,7 +4,7 @@ The Visual Query feature uses a visual representation to express related request !!! compatibility - The Visual Query feature is not compatible with Nebula Graph versions below 3.0.0. + The Visual Query feature is not compatible with NebulaGraph versions below 3.0.0. !!! note diff --git a/docs-2.0/nebula-explorer/about-explorer/ex-ug-what-is-explorer.md b/docs-2.0/nebula-explorer/about-explorer/ex-ug-what-is-explorer.md index 7863e5519cc..7a362cb33e6 100644 --- a/docs-2.0/nebula-explorer/about-explorer/ex-ug-what-is-explorer.md +++ b/docs-2.0/nebula-explorer/about-explorer/ex-ug-what-is-explorer.md @@ -1,6 +1,6 @@ # What is Nebula Explorer -Nebula Explorer (Explorer in short) is a browser-based visualization tool. It is used with the Nebula Graph core to visualize interaction with graph data. Even if there is no experience in graph database, you can quickly become a graph exploration expert. +Nebula Explorer (Explorer in short) is a browser-based visualization tool. 
It is used with the NebulaGraph core to visually interact with graph data. Even if you have no experience with graph databases, you can quickly become a graph exploration expert. !!! enterpriseonly @@ -35,9 +35,9 @@ Explorer has these features: ## Authentication -Authentication is not enabled in Nebula Graph by default. Users can log into Studio with the `root` account and any password. +Authentication is not enabled in NebulaGraph by default. Users can log into Studio with the `root` account and any password. -When Nebula Graph enables authentication, users can only sign into Studio with the specified account. For more information, see [Authentication](../../7.data-security/1.authentication/1.authentication.md). +When NebulaGraph enables authentication, users can only sign into Studio with the specified account. For more information, see [Authentication](../../7.data-security/1.authentication/1.authentication.md). ## Video diff --git a/docs-2.0/nebula-explorer/canvas-operations/visualization-mode.md b/docs-2.0/nebula-explorer/canvas-operations/visualization-mode.md index 8c5f0351ded..9056974207a 100644 --- a/docs-2.0/nebula-explorer/canvas-operations/visualization-mode.md +++ b/docs-2.0/nebula-explorer/canvas-operations/visualization-mode.md @@ -37,4 +37,4 @@ At the top left of the page, toggle the view button to switch to 3D mode. 3D mod !!! compatibility "Legacy version compatibility" - For versions of Nebula Graph below 3.0.0, you need to create an index before using the Bird View feature. For more information, see [Create an index](../../3.ngql-guide/14.native-index-statements/1.create-native-index.md). + For versions of NebulaGraph below 3.0.0, you need to create an index before using the Bird View feature. For more information, see [Create an index](../../3.ngql-guide/14.native-index-statements/1.create-native-index.md).
diff --git a/docs-2.0/nebula-explorer/db-management/11.import-data.md b/docs-2.0/nebula-explorer/db-management/11.import-data.md index 456cb6eb596..17e6832fb9a 100644 --- a/docs-2.0/nebula-explorer/db-management/11.import-data.md +++ b/docs-2.0/nebula-explorer/db-management/11.import-data.md @@ -1,6 +1,6 @@ # Import data -Explorer allows you to import data into Nebula Graph using GUI. +Explorer allows you to import data into NebulaGraph using a GUI. At the upper right corner of the page, click ![download](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-download.png) to enter the data import page. diff --git a/docs-2.0/nebula-explorer/deploy-connect/ex-ug-connect.md b/docs-2.0/nebula-explorer/deploy-connect/ex-ug-connect.md index 8785b341235..b80231a714b 100644 --- a/docs-2.0/nebula-explorer/deploy-connect/ex-ug-connect.md +++ b/docs-2.0/nebula-explorer/deploy-connect/ex-ug-connect.md @@ -1,20 +1,20 @@ -# Connect to Nebula Graph +# Connect to NebulaGraph -After successfully launching Explorer, you need to configure to connect to Nebula Graph. This topic describes how Explorer connects to the Nebula Graph database. +After successfully launching Explorer, you need to configure the connection to NebulaGraph. This topic describes how Explorer connects to the NebulaGraph database. ## Prerequisites -Before connecting to the Nebula Graph database, you need to confirm the following information: +Before connecting to the NebulaGraph database, you need to confirm the following information: -- The Nebula Graph services and Explorer are started. For more information, see [Deploy Explorer](../deploy-connect/ex-ug-connect.md). +- The NebulaGraph services and Explorer are started. For more information, see [Deploy Explorer](../deploy-connect/ex-ug-deploy.md). -- You have the local IP address and the port used by the Graph service of Nebula Graph. The default port is `9669`. +- You have the local IP address and the port used by the Graph service of NebulaGraph.
The default port is `9669`. -- You have a Nebula Graph account and its password. +- You have a NebulaGraph account and its password. ## Procedure -To connect Explorer to Nebula Graph, follow these steps: +To connect Explorer to NebulaGraph, follow these steps: 1. On the **Config Server** page of Explorer, configure these fields: @@ -38,10 +38,10 @@ To connect Explorer to Nebula Graph, follow these steps: !!! note - One session continues for up to 30 minutes. If you do not operate Explorer within 30 minutes, the active session will time out and you must connect to Nebula Graph again. + One session continues for up to 30 minutes. If you do not operate Explorer within 30 minutes, the active session will time out and you must connect to NebulaGraph again. ## Clear connection -When Explorer is still connected to a Nebula Graph database, on the upper right corner of the page, select ![icon](https://docs-cdn.nebula-graph.com.cn/figures/nav-setup.png) > **Clear Connect**. +When Explorer is still connected to a NebulaGraph database, in the upper right corner of the page, select ![icon](https://docs-cdn.nebula-graph.com.cn/figures/nav-setup.png) > **Clear Connect**. -After that, if the **configuration database** page is displayed on the browser, it means that Explorer has successfully disconnected from the Nebula Graph. +After that, if the **configuration database** page is displayed in the browser, it means that Explorer has successfully disconnected from NebulaGraph. diff --git a/docs-2.0/nebula-explorer/deploy-connect/ex-ug-deploy.md b/docs-2.0/nebula-explorer/deploy-connect/ex-ug-deploy.md index 70e7a961534..b5326df1dc4 100644 --- a/docs-2.0/nebula-explorer/deploy-connect/ex-ug-deploy.md +++ b/docs-2.0/nebula-explorer/deploy-connect/ex-ug-deploy.md @@ -2,13 +2,13 @@ This topic describes how to deploy Explorer locally by RPM and tar packages. -## Nebula Graph version +## NebulaGraph version !!!
Note - Explorer is released separately, not synchronized with Nebula Graph. And the version naming of Explorer is different from that of Nebula Graph. The version correspondence between Nebula Graph and Explorer is as follows. + Explorer is released separately and is not synchronized with NebulaGraph, and its version naming is different from that of NebulaGraph. The version correspondence between NebulaGraph and Explorer is as follows. -| Nebula Graph version | Explorer version | +| NebulaGraph version | Explorer version | | --- | --- | | 3.1.0 ~ 3.1.0| 3.1.0| | 3.0.0 ~ 3.1.0 | 3.0.0 @@ -20,7 +20,7 @@ This topic describes how to deploy Explorer locally by RPM and tar packages. Before deploying Explorer, you must check the following information: -- The Nebula Graph services are deployed and started. For more information, see [Nebula Graph Database Manual](../../2.quick-start/1.quick-start-workflow.md). +- The NebulaGraph services are deployed and started. For more information, see [NebulaGraph Database Manual](../../2.quick-start/1.quick-start-workflow.md). - Before the installation starts, the following ports are not occupied. @@ -219,7 +219,7 @@ kill $(lsof -t -i :7002) When Explorer is started, use `http://:7002` to get access to Explorer. -The following login page shows that Explorer is successfully connected to Nebula Graph. +The following login page shows that Explorer is successfully connected to NebulaGraph. ![Nebula Explorer Login page](https://docs-cdn.nebula-graph.com.cn/figures/explorer_deploy.png) @@ -227,4 +227,4 @@ The following login page shows that Explorer is successfully connected to Nebula When logging into Nebula Explorer for the first time, the content of *END USER LICENSE AGREEMENT* is displayed on the login page. Please read it and then click **I agree**. -After entering the Explorer login interface, you need to connect to Nebula Graph. For more information, refer to [Connecting to the Nebula Graph](../deploy-connect/ex-ug-connect.md).
+After entering the Explorer login interface, you need to connect to NebulaGraph. For more information, refer to [Connecting to NebulaGraph](../deploy-connect/ex-ug-connect.md). diff --git a/docs-2.0/nebula-explorer/ex-ug-page-overview.md b/docs-2.0/nebula-explorer/ex-ug-page-overview.md index 84e90a96f50..022af5880e8 100644 --- a/docs-2.0/nebula-explorer/ex-ug-page-overview.md +++ b/docs-2.0/nebula-explorer/ex-ug-page-overview.md @@ -13,12 +13,12 @@ The Nebula Explorer page consists of three modules top navigation bar, left-side | **Explorer** | Visually explore and analyze data. For more information, see [Start querying](graph-explorer/ex-ug-query-exploration.md), [Vertex Filter](graph-explorer/node-filtering.md), [Graph exploration](graph-explorer/ex-ug-graph-exploration.md) and [Graph algorithm](graph-explorer/graph-algorithm.md). | | **Visual Query** | Visually construct scenarios for data queries. For more information, see [Visual Query](12.query-visually.md). | | **Workflow** | Visually construct custom workflows for complex graph computing. For more information, see [Workflow overview](workflow/workflows.md). | -| ![create_schema](https://docs-cdn.nebula-graph.com.cn/figures/studio-nav-schema.png) | Manage Nebula Graph database graph spaces. For more information, see [Create a schema](db-management/10.create-schema.md). | -| ![import_data](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-download.png) | Bulk import of data into Nebula Graph. For more information, see [Import data](db-management/11.import-data.md). | -| ![Console](https://docs-cdn.nebula-graph.com.cn/figures/nav-console2.png) | Query the Nebula Graph data with nGQL statements. For more information, see [Console](db-management/explorer-console.md). | +| ![create_schema](https://docs-cdn.nebula-graph.com.cn/figures/studio-nav-schema.png) | Manage NebulaGraph database graph spaces. For more information, see [Create a schema](db-management/10.create-schema.md).
| +| ![import_data](https://docs-cdn.nebula-graph.com.cn/figures/studio-btn-download.png) | Bulk import of data into NebulaGraph. For more information, see [Import data](db-management/11.import-data.md). | +| ![Console](https://docs-cdn.nebula-graph.com.cn/figures/nav-console2.png) | Query the NebulaGraph data with nGQL statements. For more information, see [Console](db-management/explorer-console.md). | | ![language](https://docs-cdn.nebula-graph.com.cn/figures/navbar-language.png) | Select the language of Nebula Explorer page. Chinese and English are supported. | -| ![help](https://docs-cdn.nebula-graph.com.cn/figures/navbar-help.png) | Guide and help you in using Nebula Graph. | -| ![clear_connection](https://docs-cdn.nebula-graph.com.cn/figures/image-icon10.png) | Show the Nebula Graph version and allow you to disconnect from Nebula Explorer. | +| ![help](https://docs-cdn.nebula-graph.com.cn/figures/navbar-help.png) | Guide and help you in using NebulaGraph. | +| ![clear_connection](https://docs-cdn.nebula-graph.com.cn/figures/image-icon10.png) | Show the NebulaGraph version and allow you to disconnect from Nebula Explorer. | ## Left-side navigation bar @@ -39,7 +39,7 @@ Click the icons in the left-side navigation bar to import, analyze, and explore | ![graph-algorithm](https://docs-cdn.nebula-graph.com.cn/figures/rightclickmenu-graphCalculation.png)| Perform graph computing based on the vertexes and edges in the canvas. For more Information see [Graph computing](graph-explorer/ex-ug-graph-exploration.md). | | ![snapshot](https://docs-cdn.nebula-graph.com.cn/figures/snapshot-history.png) | View historical snapshots. For more information, see [Canvas snapshots](canvas-operations/canvas-snapshot.md). | | ![graphSpace](https://docs-cdn.nebula-graph.com.cn/figures/nav-graphSpace.png) | View all graph spaces. Click a graph space to create a canvas corresponding to it. For more information, see [Choose graph spaces](graph-explorer/13.choose-graphspace.md). 
| -| ![Help](https://docs-cdn.nebula-graph.com.cn/figures/nav-help.png) | View Explorer documents and Nebula Graph forum. | +| ![Help](https://docs-cdn.nebula-graph.com.cn/figures/nav-help.png) | View Explorer documents and NebulaGraph forum. | | ![Setup](https://docs-cdn.nebula-graph.com.cn/figures/nav-setup2.png) | View your account, explorer version and shortcuts, limit returned results.| ## Canvas diff --git a/docs-2.0/nebula-explorer/graph-explorer/ex-ug-query-exploration.md b/docs-2.0/nebula-explorer/graph-explorer/ex-ug-query-exploration.md index 3dd526bc774..59265c1d5a5 100644 --- a/docs-2.0/nebula-explorer/graph-explorer/ex-ug-query-exploration.md +++ b/docs-2.0/nebula-explorer/graph-explorer/ex-ug-query-exploration.md @@ -8,7 +8,7 @@ Select a target graph space before querying data. For more information, see [Cho !!! compatibility "Legacy version compatibility" - For versions of Nebula Graph below 3.0.0, you need to create an index before querying data. For more information, see [Create an index](../../3.ngql-guide/14.native-index-statements/1.create-native-index.md). + For versions of NebulaGraph below 3.0.0, you need to create an index before querying data. For more information, see [Create an index](../../3.ngql-guide/14.native-index-statements/1.create-native-index.md). ## Steps diff --git a/docs-2.0/nebula-explorer/workflow/1.prepare-resources.md b/docs-2.0/nebula-explorer/workflow/1.prepare-resources.md index 4f56edf4a85..164802c91ed 100644 --- a/docs-2.0/nebula-explorer/workflow/1.prepare-resources.md +++ b/docs-2.0/nebula-explorer/workflow/1.prepare-resources.md @@ -1,6 +1,6 @@ # Prepare resources -You must prepare your environment for running a workflow, including Nebula Graph configurations, HDFS configurations, and Nebula Analytics configurations. +You must prepare your environment for running a workflow, including NebulaGraph configurations, HDFS configurations, and Nebula Analytics configurations. 
## Prerequisites @@ -16,7 +16,7 @@ Nebula Analytics {{plato.release}} or later and Dag Controller {{dag.release}} o |Type|Description| |:--|:--| - |Nebula Graph Configuration| The access address of the graph service that executes a graph query or to which the graph computing result is written. The default address is the address that you use to log into Explorer and can not be changed. You can set timeout periods for three services.| + |NebulaGraph Configuration| The access address of the graph service that executes a graph query or to which the graph computing result is written. The default address is the address that you use to log into Explorer and can not be changed. You can set timeout periods for three services.| |HDFS Configuration| The HDFS address that stores the result of the graph query or graph computing. Click **Add** to add a new address, you can set the HDFS name, HDFS path, and HDFS username (optional). |Nebula Analytics Configuration| The Nebula Analytics address that performs the graph computing. Click **Add** to add a new address.| diff --git a/docs-2.0/nebula-explorer/workflow/2.create-workflow.md b/docs-2.0/nebula-explorer/workflow/2.create-workflow.md index 7b0dfe21eff..5e750db0c07 100644 --- a/docs-2.0/nebula-explorer/workflow/2.create-workflow.md +++ b/docs-2.0/nebula-explorer/workflow/2.create-workflow.md @@ -4,7 +4,7 @@ This topic describes how to create a simple workflow. ## Prerequisites -- The data source is ready. The data source can be data in Nebula Graph or CSV files on HDFS. +- The data source is ready. The data source can be data in NebulaGraph or CSV files on HDFS. - The [resource](1.prepare-resources.md) has been configured. 
@@ -43,11 +43,11 @@ With the result of the MATCH statement `MATCH (v1:player)--(v2) RETURN id(v1), |Parameters|Description| |:---|:---| |PageRank|Click ![pencil](https://docs-cdn.nebula-graph.com.cn/figures/workflow-edit.png) to modify the component name to identify the component.| - |Input| Three data sources are supported as input.<br>**Nebula Graph**: Users must select one graph space and corresponding edge types.<br>**Dependence**: The system will automatically recognize the data source according to the connection of the anchor.<br>**HDFS**: Users must select HDFS and fill in the relative path of the data source file.| + |Input| Three data sources are supported as input.<br>**NebulaGraph**: Users must select one graph space and corresponding edge types.<br>**Dependence**: The system will automatically recognize the data source according to the connection of the anchor.<br>**HDFS**: Users must select HDFS and fill in the relative path of the data source file.| |Parameter settings| Set the parameters of the graph algorithm. The parameters of different algorithms are different. Some parameters can be obtained from any upstream component where the anchor are shown in yellow.| |Output| Display the column name of the graph computing results. The name can not be modified.| |Execution settings| **Machine num**: The number of machines executing the algorithm.<br>**Processes**: The total number of processes executing the algorithm. Allocate these processes equally to each machine based on the number of machines.<br>**Threads**: How many threads are started per process.| - |Results| Set the restoration path of the results in HDFS or Nebula Graph.<br>**HDFS**: The save path is automatically generated based on the job and task ID.<br>**Nebula Graph**: Tags need to be created beforehand in the corresponding graph space to store the results. For more information about the properties of the tag, see [Algorithm overview](../../graph-computing/algorithm-description.md).<br>Some algorithms can only be saved in the HDFS.| + |Results| Set the save path of the results in HDFS or NebulaGraph.<br>**HDFS**: The save path is automatically generated based on the job and task ID.<br>**NebulaGraph**: Tags need to be created beforehand in the corresponding graph space to store the results. For more information about the properties of the tag, see [Algorithm overview](../../graph-computing/algorithm-description.md).<br>Some algorithms can only be saved in the HDFS.| 6. Click ![pencil](https://docs-cdn.nebula-graph.com.cn/figures/workflow-edit.png) next to the automatically generated workflow name at the upper left corner of the canvas page to modify the workflow name, and click **Run** at the upper right corner of the canvas page. The job page is automatically displayed to show the job progress. You can view the result after the job is completed. For details, see [Job management](4.jobs-management.md). diff --git a/docs-2.0/nebula-explorer/workflow/workflow-api/workflow-api-overview.md b/docs-2.0/nebula-explorer/workflow/workflow-api/workflow-api-overview.md index 8c88c657c3e..4de30470efe 100644 --- a/docs-2.0/nebula-explorer/workflow/workflow-api/workflow-api-overview.md +++ b/docs-2.0/nebula-explorer/workflow/workflow-api/workflow-api-overview.md @@ -39,9 +39,9 @@ Token information verification is required when calling an API. Run the followin curl -i -X POST -H "Content-Type: application/json" -H "Authorization: Bearer " -d '{"address":"","port":}' http://:/api-open/v1/connect ``` -- ``: The Base64 encoded Nebula Graph account and password. Before the encoding, the format is `:`, for example, `root:123`. After the encoding, the result is `cm9vdDoxMjM=`. -- ``: The access address of the Nebula Graph. -- ``: The access port of the Nebula Graph. +- ``: The Base64 encoded NebulaGraph account and password. Before the encoding, the format is `:`, for example, `root:123`. After the encoding, the result is `cm9vdDoxMjM=`. +- ``: The access address of NebulaGraph. +- ``: The access port of NebulaGraph. - ``: The access address of the Nebula Explorer. - ``: The access port of the Nebula Explorer.
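The `cm9vdDoxMjM=` value quoted above can be reproduced in a shell. This is a sketch assuming a POSIX shell with coreutils `base64`, using the doc's example credentials `root:123` (placeholders, not real credentials):

```shell
# Encode the example account and password for the Authorization step.
# printf (not echo) is used so no trailing newline sneaks into the encoding.
AUTH=$(printf '%s' 'root:123' | base64)
echo "$AUTH"   # -> cm9vdDoxMjM=
```

Decoding the result with `base64 -d` returns the original `root:123` pair, which is a quick way to verify the value before passing it to the connect call.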
diff --git a/docs-2.0/nebula-explorer/workflow/workflows.md b/docs-2.0/nebula-explorer/workflow/workflows.md index 88d47fe084a..eea7ffd9319 100644 --- a/docs-2.0/nebula-explorer/workflow/workflows.md +++ b/docs-2.0/nebula-explorer/workflow/workflows.md @@ -25,12 +25,12 @@ Instantiate the workflow when performing graph computing. The instantiated compo - The results of a graph query component can only be stored in the HDFS, which is convenient to be called by multiple algorithms. -- The input to the graph computing component can be the specified data in the Nebula Graph or HDFS, or can depend on the results of the graph query component. +- The input to the graph computing component can be the specified data in the NebulaGraph or HDFS, or can depend on the results of the graph query component. If an input depends on the results of the previous graph query component, the graph computing component must be fully connected to the graph query component, that is, the white output anchors of the previous graph query component are all connected to the white input anchors of the graph compute component. - The parameters of some algorithms can also depend on the upstream components. -- The result of the graph computing components can be stored in the Nebula Graph or HDFS, but not all algorithm results are suitable to be stored in Nebula Graph. Some algorithms can only be saved in HDFS when configuring the save results page. +- The result of the graph computing components can be stored in the NebulaGraph or HDFS, but not all algorithm results are suitable to be stored in NebulaGraph. Some algorithms can only be saved in HDFS when configuring the save results page. 
## Algorithm description diff --git a/docs-2.0/nebula-flink-connector.md b/docs-2.0/nebula-flink-connector.md index 4550138acef..d741170e718 100644 --- a/docs-2.0/nebula-flink-connector.md +++ b/docs-2.0/nebula-flink-connector.md @@ -1,7 +1,7 @@ # Nebula Flink Connector -Nebula Flink Connector is a connector that helps Flink users quickly access Nebula Graph. Nebula Flink Connector supports reading data from the Nebula Graph database or writing other external data to the Nebula Graph database. +Nebula Flink Connector is a connector that helps Flink users quickly access NebulaGraph. Nebula Flink Connector supports reading data from the NebulaGraph database or writing other external data to the NebulaGraph database. For more information, see [Nebula Flink Connector](https://github.com/vesoft-inc/nebula-flink-connector). @@ -9,11 +9,11 @@ For more information, see [Nebula Flink Connector](https://github.com/vesoft-inc Nebula Flink Connector applies to the following scenarios: -* Migrate data between different Nebula Graph clusters. +* Migrate data between different NebulaGraph clusters. -* Migrate data between different graph spaces in the same Nebula Graph cluster. +* Migrate data between different graph spaces in the same NebulaGraph cluster. -* Migrate data between Nebula Graph and other data sources. +* Migrate data between NebulaGraph and other data sources. ## Release note diff --git a/docs-2.0/nebula-importer/config-with-header.md b/docs-2.0/nebula-importer/config-with-header.md index c1d4f59386d..bb69ffaba13 100644 --- a/docs-2.0/nebula-importer/config-with-header.md +++ b/docs-2.0/nebula-importer/config-with-header.md @@ -75,7 +75,7 @@ Such as `student.name:string`, `follow.degree:double`. ## Sample configuration ```yaml -# Connected to the Nebula Graph version, set to v3 when connected to 3.x. +# Connected to the NebulaGraph version, set to v3 when connected to 3.x. 
version: v3 description: example @@ -88,13 +88,13 @@ clientSettings: # Retry times of nGQL statement execution failures. retry: 3 - # Number of Nebula Graph client concurrency. + # Number of NebulaGraph client concurrency. concurrency: 10 - # Cache queue size per Nebula Graph client. + # Cache queue size per NebulaGraph client. channelBufferSize: 128 - # Specifies the Nebula Graph space to import the data into. + # Specifies the NebulaGraph space to import the data into. space: student # Connection information. @@ -104,7 +104,7 @@ clientSettings: address: 192.168.*.13:9669 postStart: - # Configure some of the operations to perform after connecting to the Nebula Graph server, and before inserting data. + # Configure some of the operations to perform after connecting to the NebulaGraph server, and before inserting data. commands: | DROP SPACE IF EXISTS student; CREATE SPACE IF NOT EXISTS student(partition_num=5, replica_factor=1, vid_type=FIXED_STRING(20)); @@ -116,7 +116,7 @@ clientSettings: afterPeriod: 15s preStop: - # Configure some of the actions you performed before disconnecting from the Nebula Graph server. + # Configure some of the actions you performed before disconnecting from the NebulaGraph server. commands: | # Path of the error log file. diff --git a/docs-2.0/nebula-importer/config-without-header.md b/docs-2.0/nebula-importer/config-without-header.md index 0d25e3b430e..ef525204bdf 100644 --- a/docs-2.0/nebula-importer/config-without-header.md +++ b/docs-2.0/nebula-importer/config-without-header.md @@ -34,7 +34,7 @@ The following is an example of a CSV file without header: ## Sample configuration ```yaml -# Connected to the Nebula Graph version, set to v3 when connected to 3.x. +# Connected to the NebulaGraph version, set to v3 when connected to 3.x. version: v3 description: example @@ -47,13 +47,13 @@ clientSettings: # Retry times of nGQL statement execution failures. retry: 3 - # Number of Nebula Graph client concurrency. 
+ # Number of NebulaGraph client concurrency. concurrency: 10 - # Cache queue size per Nebula Graph client. + # Cache queue size per NebulaGraph client. channelBufferSize: 128 - # Specifies the Nebula Graph space to import the data into. + # Specifies the NebulaGraph space to import the data into. space: student # Connection information. @@ -63,7 +63,7 @@ clientSettings: address: 192.168.*.13:9669 postStart: - # Configure some of the operations to perform after connecting to the Nebula Graph server, and before inserting data. + # Configure some of the operations to perform after connecting to the NebulaGraph server, and before inserting data. commands: | DROP SPACE IF EXISTS student; CREATE SPACE IF NOT EXISTS student(partition_num=5, replica_factor=1, vid_type=FIXED_STRING(20)); @@ -75,7 +75,7 @@ clientSettings: afterPeriod: 15s preStop: - # Configure some of the actions you performed before disconnecting from the Nebula Graph server. + # Configure some of the actions you performed before disconnecting from the NebulaGraph server. commands: | # Path of the error log file. @@ -123,7 +123,7 @@ files: # The vertex ID corresponds to the column number in the CSV file. Columns in the CSV file are numbered from 0. index: 0 - # The data type of the vertex ID. The optional values are int and string, corresponding to INT64 and FIXED_STRING in the Nebula Graph, respectively. + # The data type of the vertex ID. The optional values are int and string, corresponding to INT64 and FIXED_STRING in the NebulaGraph, respectively. type: string # Tag Settings. diff --git a/docs-2.0/nebula-importer/use-importer.md b/docs-2.0/nebula-importer/use-importer.md index 76d874a6d88..da80f503dd6 100644 --- a/docs-2.0/nebula-importer/use-importer.md +++ b/docs-2.0/nebula-importer/use-importer.md @@ -1,10 +1,10 @@ # Nebula Importer -Nebula Importer (Importer) is a standalone tool for importing data from CSV files into [Nebula Graph](https://github.com/vesoft-inc/nebula). 
Importer can read the local CSV file and then import the data into the Nebula Graph database. +Nebula Importer (Importer) is a standalone tool for importing data from CSV files into [NebulaGraph](https://github.com/vesoft-inc/nebula). Importer can read the local CSV file and then import the data into the NebulaGraph database. ## Scenario -Importer is used to import the contents of a local CSV file into the Nebula Graph. +Importer is used to import the contents of a local CSV file into NebulaGraph. ## Advantage @@ -20,21 +20,21 @@ Importer is used to import the contents of a local CSV file into the Nebula Grap Before using Nebula Importer, make sure: -- Nebula Graph service has been deployed. There are currently three deployment modes: +- The NebulaGraph service has been deployed. There are currently three deployment modes: - - [Deploy Nebula Graph with Docker Compose](../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md) + - [Deploy NebulaGraph with Docker Compose](../4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md) - - [Install Nebula Graph with RPM or DEB package](../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) + - [Install NebulaGraph with RPM or DEB package](../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md) - - [Install Nebula Graph by compiling the source code](../4.deployment-and-installation/2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md) + - [Install NebulaGraph by compiling the source code](../4.deployment-and-installation/2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md) -- Schema is created in Nebula Graph, including space, Tag and Edge type, or set by parameter `clientSettings.postStart.commands`.
+- A schema is created in NebulaGraph, including the space, Tags, and Edge types, or is set by the parameter `clientSettings.postStart.commands`. - Golang environment has been deployed on the machine running the Importer. For details, see [Build Go environment](https://github.com/vesoft-inc/nebula-importer/blob/{{importer.branch}}/docs/golang-install-en.md). ## Steps -Configure the YAML file and prepare the CSV file to be imported to use the tool to batch write data to Nebula Graph. +To batch write data to NebulaGraph with the tool, configure the YAML file and prepare the CSV file to be imported. ### Download binary package and run @@ -57,7 +57,7 @@ Configure the YAML file and prepare the CSV file to be imported to use the tool !!! note Use the correct branch. - Nebula Graph 2.x and 3.x have different RPC protocols. + NebulaGraph 2.x and 3.x have different RPC protocols. 2. Access the directory `nebula-importer`. @@ -123,14 +123,14 @@ $ docker run --rm -ti \ - ``: The absolute path to the local YAML configuration file. - ``: The absolute path to the local CSV data file. -- ``: Nebula Graph 2.x Please fill in 'v3'. +- ``: For NebulaGraph 2.x, please fill in 'v3'. !!! note A relative path is recommended. If you use a local absolute path, check that the path maps to the path in the Docker. ## Configuration File Description -Nebula Importer uses configuration(`nebula-importer/examples/v2/example.yaml`) files to describe information about the files to be imported, the Nebula Graph server, and more. You can refer to the example configuration file: [Configuration without Header](config-without-header.md)/[Configuration with Header](config-with-header.md). This section describes the fields in the configuration file by category. +Nebula Importer uses configuration (`nebula-importer/examples/v2/example.yaml`) files to describe information about the files to be imported, the NebulaGraph server, and more.
You can refer to the example configuration file: [Configuration without Header](config-without-header.md)/[Configuration with Header](config-with-header.md). This section describes the fields in the configuration file by category. !!! note @@ -148,13 +148,13 @@ removeTempFiles: false |Parameter|Default value|Required|Description| |:---|:---|:---|:---| -|`version`|v2|Yes|Target version of Nebula Graph.| +|`version`|v2|Yes|Target version of NebulaGraph.| |`description`|example|No|Description of the configuration file.| |`removeTempFiles`|false|No|Whether to delete temporarily generated logs and error data files.| ### Client configuration -The client configuration stores the configurations associated with Nebula Graph. +The client configuration stores the configurations associated with NebulaGraph. The example configuration is as follows: @@ -182,15 +182,15 @@ clientSettings: |Parameter|Default value|Required|Description| |:---|:---|:---|:---| |`clientSettings.retry`|3|No|Retry times of nGQL statement execution failures.| -|`clientSettings.concurrency`|10|No|Number of Nebula Graph client concurrency.| -|`clientSettings.channelBufferSize`|128|No|Cache queue size per Nebula Graph client.| -|`clientSettings.space`|-|Yes|Specifies the Nebula Graph space to import the data into. Do not import multiple spaces at the same time to avoid performance impact.| -|`clientSettings.connection.user`|-|Yes|Nebula Graph user name.| -|`clientSettings.connection.password`|-|Yes|The password for the Nebula Graph user name.| +|`clientSettings.concurrency`|10|No|The number of concurrent NebulaGraph clients.| +|`clientSettings.channelBufferSize`|128|No|Cache queue size per NebulaGraph client.| +|`clientSettings.space`|-|Yes|Specifies the NebulaGraph space to import the data into.
Do not import multiple spaces at the same time to avoid performance impact.| +|`clientSettings.connection.user`|-|Yes|NebulaGraph user name.| +|`clientSettings.connection.password`|-|Yes|The password for the NebulaGraph user name.| |`clientSettings.connection.address`|-|Yes|Addresses and ports for all Graph services.| -|`clientSettings.postStart.commands`|-|No|Configure some of the operations to perform after connecting to the Nebula Graph server, and before inserting data.| +|`clientSettings.postStart.commands`|-|No|Configure some of the operations to perform after connecting to the NebulaGraph server, and before inserting data.| |`clientSettings.postStart.afterPeriod`|-|No|The interval, between executing the above `commands` and executing the insert data command, such as `8s`.| -|`clientSettings.preStop.commands`|-|No|Configure some of the actions you performed before disconnecting from the Nebula Graph server.| +|`clientSettings.preStop.commands`|-|No|Configure some of the actions to perform before disconnecting from the NebulaGraph server.| ### File configuration @@ -263,7 +263,7 @@ schema: |`files.schema.vertex.vid.type`|-|No|The data type of the vertex ID.
Possible values are `int` and `string`.| |`files.schema.vertex.vid.index`|-|No|The vertex ID corresponds to the column number in the CSV file.| |`files.schema.vertex.tags.name`|-|Yes|Tag name.| -|`files.schema.vertex.tags.props.name`|-|Yes|Tag property name, which must match the Tag property in the Nebula Graph.| +|`files.schema.vertex.tags.props.name`|-|Yes|Tag property name, which must match the Tag property in NebulaGraph.| |`files.schema.vertex.tags.props.type`|-|Yes|Property data type, supporting `bool`, `int`, `float`, `double`, `timestamp` and `string`.| |`files.schema.vertex.tags.props.index`|-|No|Property corresponds to the sequence number of the column in the CSV file.| @@ -303,7 +303,7 @@ schema: |`files.schema.edge.dstVID.type`|-|No|The data type of the destination vertex ID of the edge.| |`files.schema.edge.dstVID.index`|-|No|The destination vertex ID of the edge corresponds to the column number in the CSV file.| |`files.schema.edge.rank.index`|-|No|The rank value of the edge corresponds to the column number in the CSV file.| -|`files.schema.edge.props.name`|-|Yes|The Edge Type property name must match the Edge Type property in the Nebula Graph.| +|`files.schema.edge.props.name`|-|Yes|The Edge Type property name must match the Edge Type property in NebulaGraph.| |`files.schema.edge.props.type`|-|Yes|Property data type, supporting `bool`, `int`, `float`, `double`, `timestamp` and `string`.| |`files.schema.edge.props.index`|-|No|Property corresponds to the sequence number of the column in the CSV file.| diff --git a/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md b/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md index a30808415a2..d51eb092780 100644 --- a/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md +++ b/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md @@ -2,35 +2,35 @@ ## Concept of Nebula Operator -Nebula Operator is a tool to automate the deployment, operation, and maintenance of
[Nebula Graph](https://github.com/vesoft-inc/nebula) clusters on [Kubernetes](https://kubernetes.io). Building upon the excellent scalability mechanism of Kubernetes, Nebula Graph introduced its operation and maintenance knowledge into the Kubernetes system, which makes Nebula Graph a real [cloud-native graph database](https://www.nebula-cloud.io/). +Nebula Operator is a tool to automate the deployment, operation, and maintenance of [NebulaGraph](https://github.com/vesoft-inc/nebula) clusters on [Kubernetes](https://kubernetes.io). Building upon the excellent scalability mechanism of Kubernetes, NebulaGraph introduced its operation and maintenance knowledge into the Kubernetes system, which makes NebulaGraph a real [cloud-native graph database](https://www.nebula-cloud.io/). ## How it works For resource types that do not exist within Kubernetes, you can register them by adding custom API objects. The common way is to use the [CustomResourceDefinition](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions). -Nebula Operator abstracts the deployment management of Nebula Graph clusters as a CRD. By combining multiple built-in API objects including StatefulSet, Service, and ConfigMap, the routine management and maintenance of a Nebula Graph cluster are coded as a control loop in the Kubernetes system. When a CR instance is submitted, Nebula Operator drives database clusters to the final state according to the control process. +Nebula Operator abstracts the deployment management of NebulaGraph clusters as a CRD. By combining multiple built-in API objects including StatefulSet, Service, and ConfigMap, the routine management and maintenance of a NebulaGraph cluster are coded as a control loop in the Kubernetes system. When a CR instance is submitted, Nebula Operator drives database clusters to the final state according to the control process.
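As an illustration of the CRD-driven control loop described above, a minimal NebulaCluster CR might look like the following sketch. The `apiVersion` and `kind` follow the `apps.nebula-graph.io/v1alpha1` examples later in this document; the replica counts, image names, and exact field spellings here are illustrative assumptions, not a complete production spec — consult the full YAML examples in the deployment guides below for all required fields.

```yaml
# Illustrative minimal NebulaCluster CR (not a complete spec).
apiVersion: apps.nebula-graph.io/v1alpha1
kind: NebulaCluster
metadata:
  name: nebula            # used as the prefix of the generated StatefulSets/Services
spec:
  graphd:
    replicas: 1           # the Operator reconciles the cluster toward these counts
    image: vesoft/nebula-graphd
  metad:
    replicas: 1
    image: vesoft/nebula-metad
  storaged:
    replicas: 3
    image: vesoft/nebula-storaged
```

Submitting such a CR (for example with `kubectl create -f`) is what triggers the control loop: the Operator compares the declared state against the running StatefulSets and Services and drives the cluster toward it.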
## Features of Nebula Operator The following features are already available in Nebula Operator: -- **Deploy and uninstall clusters**: Nebula Operator simplifies the process of deploying and uninstalling clusters for users. Nebula Operator allows you to quickly create, update, or delete a Nebula Graph cluster by simply providing the corresponding CR file. For more information, see [Deploy Nebula Graph Clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Deploy Nebula Graph Clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). +- **Deploy and uninstall clusters**: Nebula Operator simplifies the process of deploying and uninstalling clusters for users. Nebula Operator allows you to quickly create, update, or delete a NebulaGraph cluster by simply providing the corresponding CR file. For more information, see [Deploy NebulaGraph Clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Deploy NebulaGraph Clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). -- **Scale clusters**: Nebula Operator calls Nebula Graph's native scaling interfaces in a control loop to implement the scaling logic. You can simply perform scaling operations with YAML configurations and ensure the stability of data. For more information, see [Scale clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Scale clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). +- **Scale clusters**: Nebula Operator calls NebulaGraph's native scaling interfaces in a control loop to implement the scaling logic. You can simply perform scaling operations with YAML configurations and ensure the stability of data. For more information, see [Scale clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Scale clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). 
- **Cluster Upgrade**: Nebula Operator supports cluster upgrading from version {{operator.upgrade_from}} to version {{operator.upgrade_to}}. -- **Self-Healing**: Nebula Operator calls interfaces provided by Nebula Graph clusters to dynamically sense cluster service status. Once an exception is detected, Nebula Operator performs fault tolerance. For more information, see [Self-Healing](5.operator-failover.md). +- **Self-Healing**: Nebula Operator calls interfaces provided by NebulaGraph clusters to dynamically sense cluster service status. Once an exception is detected, Nebula Operator performs fault tolerance. For more information, see [Self-Healing](5.operator-failover.md). -- **Balance Scheduling**: Based on the scheduler extension interface, the scheduler provided by Nebula Operator evenly distributes Pods in a Nebula Graph cluster across all nodes. +- **Balance Scheduling**: Based on the scheduler extension interface, the scheduler provided by Nebula Operator evenly distributes Pods in a NebulaGraph cluster across all nodes. ## Limitations ### Version limitations -Nebula Operator does not support the v1.x version of Nebula Graph. Nebula Operator version and the corresponding Nebula Graph version are as follows: +Nebula Operator does not support the v1.x version of NebulaGraph. Nebula Operator version and the corresponding NebulaGraph version are as follows: -| Nebula Operator version | Nebula Graph version | +| Nebula Operator version | NebulaGraph version | | ------------------- | ---------------- | | 1.1.0| 3.0.0 ~ 3.1.x | | 1.0.0| 3.0.0 ~ 3.1.x | @@ -39,12 +39,12 @@ Nebula Operator does not support the v1.x version of Nebula Graph. Nebula Operat !!! Compatibility "Legacy version compatibility" - - The 1.x version Nebula Operator is not compatible with Nebula Graph of version below v3.x. - - Starting from Nebula Operator 0.9.0, logs and data are stored separately. 
Using Nebula Operator 0.9.0 or later versions to manage a Nebula Graph 2.5.x cluster created with Operator 0.8.0 can cause compatibility issues. You can backup the data of the Nebula Graph 2.5.x cluster and then create a 2.6.x cluster with Operator 0.9.0. + - The 1.x version Nebula Operator is not compatible with NebulaGraph of version below v3.x. + - Starting from Nebula Operator 0.9.0, logs and data are stored separately. Using Nebula Operator 0.9.0 or later versions to manage a NebulaGraph 2.5.x cluster created with Operator 0.8.0 can cause compatibility issues. You can back up the data of the NebulaGraph 2.5.x cluster and then create a 2.6.x cluster with Operator 0.9.0. ### Feature limitations -The Nebula Operator scaling feature is only available for the Enterprise Edition of Nebula Graph clusters and does not support scaling the Community Edition version of Nebula Graph clusters. +The Nebula Operator scaling feature is only available for the Enterprise Edition of NebulaGraph clusters and does not support scaling the Community Edition version of NebulaGraph clusters. ## Release note diff --git a/docs-2.0/nebula-operator/2.deploy-nebula-operator.md b/docs-2.0/nebula-operator/2.deploy-nebula-operator.md index 6cbbf2e4772..7212ce57af5 100644 --- a/docs-2.0/nebula-operator/2.deploy-nebula-operator.md +++ b/docs-2.0/nebula-operator/2.deploy-nebula-operator.md @@ -4,7 +4,7 @@ You can deploy Nebula Operator with [Helm](https://helm.sh/). ## Background -[Nebula Operator](1.introduction-to-nebula-operator.md) automates the management of Nebula Graph clusters, and eliminates the need for you to install, scale, upgrade, and uninstall Nebula Graph clusters, which lightens the burden on managing different application versions.
+[Nebula Operator](1.introduction-to-nebula-operator.md) automates the management of NebulaGraph clusters, and eliminates the need for you to install, scale, upgrade, and uninstall NebulaGraph clusters, which lightens the burden on managing different application versions. ## Prerequisites @@ -30,9 +30,9 @@ If using a role-based access control policy, you need to enable [RBAC](https://k - [CoreDNS](https://coredns.io/) - CoreDNS is a flexible and scalable DNS server that is [installed](https://github.com/coredns/deployment/tree/master/kubernetes) for Pods in Nebula Graph clusters. + CoreDNS is a flexible and scalable DNS server that is [installed](https://github.com/coredns/deployment/tree/master/kubernetes) for Pods in NebulaGraph clusters. - Components in a Nebula Graph cluster communicate with each other via DNS resolutions for domain names, like `x.default.svc.cluster.local`. + Components in a NebulaGraph cluster communicate with each other via DNS resolutions for domain names, like `x.default.svc.cluster.local`. - [cert-manager](https://cert-manager.io/) @@ -42,7 +42,7 @@ If using a role-based access control policy, you need to enable [RBAC](https://k cert-manager is a tool that automates the management of certificates. It leverages extensions of the Kubernetes API and uses the Webhook server to provide dynamic access control to cert-manager resources. For more information about installation, see [cert-manager installation documentation](https://cert-manager.io/docs/installation/kubernetes/). - cert-manager is used to validate the numeric value of replicas for each component in a Nebula Graph cluster. If you run it in a production environment and care about the high availability of Nebula Graph clusters, it is recommended to set the value of `admissionWebhook.create` to `true` before installing cert-manager. + cert-manager is used to validate the numeric value of replicas for each component in a NebulaGraph cluster. 
If you run it in a production environment and care about the high availability of NebulaGraph clusters, it is recommended to set the value of `admissionWebhook.create` to `true` before installing cert-manager. - [OpenKruise](https://openkruise.io/en-us/) @@ -188,7 +188,7 @@ For more information about `helm install`, see [Helm Install](https://helm.sh/do !!! Compatibility "Legacy version compatibility" - Does not support upgrading 0.9.0 and below version NebulaGraph Operator to 1.x. - - The 1.x version Nebula Operator is not compatible with Nebula Graph of version below v3.x. + - The 1.x version Nebula Operator is not compatible with NebulaGraph of version below v3.x. 1. Update the information of available charts locally from chart repositories. @@ -225,7 +225,7 @@ For more information about `helm install`, see [Helm Install](https://helm.sh/do 3. Pull the latest CRD configuration file. !!! note - You need to upgrade the corresponding CRD configurations after Nebula Operator is upgraded. Otherwise, the creation of Nebula Graph clusters will fail. For information about the CRD configurations, see [apps.nebula-graph.io_nebulaclusters.yaml](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.tag}}/config/crd/bases/apps.nebula-graph.io_nebulaclusters.yaml). + You need to upgrade the corresponding CRD configurations after Nebula Operator is upgraded. Otherwise, the creation of NebulaGraph clusters will fail. For information about the CRD configurations, see [apps.nebula-graph.io_nebulaclusters.yaml](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.tag}}/config/crd/bases/apps.nebula-graph.io_nebulaclusters.yaml). 1. Pull the Nebula Operator chart package. @@ -274,4 +274,4 @@ For more information about `helm install`, see [Helm Install](https://helm.sh/do ## What's next -Automate the deployment of Nebula Graph clusters with Nebula Operator. 
For more information, see [Deploy Nebula Graph Clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Deploy Nebula Graph Clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). +Automate the deployment of NebulaGraph clusters with Nebula Operator. For more information, see [Deploy NebulaGraph Clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Deploy NebulaGraph Clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). diff --git a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md index 1bb4610b3c4..d2c5473155a 100644 --- a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md +++ b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md @@ -1,21 +1,21 @@ -# Deploy Nebula Graph clusters with Kubectl +# Deploy NebulaGraph clusters with Kubectl !!! Compatibility "Legacy version compatibility" - The 1.x version Nebula Operator is not compatible with Nebula Graph of version below v3.x. + The 1.x version Nebula Operator is not compatible with NebulaGraph of version below v3.x. ## Prerequisites - [Install Nebula Operator](../2.deploy-nebula-operator.md) -- You have prepared the license file for Nebula Graph Enterprise Edition clusters. +- You have prepared the license file for NebulaGraph Enterprise Edition clusters. !!! enterpriseonly - The license file is required only when creating a Nebula Graph Enterprise Edition cluster. + The license file is required only when creating a NebulaGraph Enterprise Edition cluster. ## Create clusters -The following example shows how to create a Nebula Graph cluster by creating a cluster named `nebula`. +The following example shows how to create a NebulaGraph cluster named `nebula`. 1.
Create a file named `apps_v1alpha1_nebulacluster.yaml`. @@ -100,7 +100,7 @@ The following example shows how to create a Nebula Graph cluster by creating a c === "Enterprise Edition" ```yaml - # Contact our sales team to get a complete Nebula Graph Enterprise Edition cluster YAML example. + # Contact our sales team to get a complete NebulaGraph Enterprise Edition cluster YAML example. apiVersion: apps.nebula-graph.io/v1alpha1 kind: NebulaCluster @@ -215,7 +215,7 @@ The following example shows how to create a Nebula Graph cluster by creating a c | Parameter | Default value | Description | | :---- | :--- | :--- | - | `metadata.name` | - | The name of the created Nebula Graph cluster. | + | `metadata.name` | - | The name of the created NebulaGraph cluster. | | `spec.graphd.replicas` | `1` | The numeric value of replicas of the Graphd service. | | `spec.graphd.images` | `vesoft/nebula-graphd` | The container image of the Graphd service. | | `spec.graphd.version` | `{{nebula.tag}}` | The version of the Graphd service. | @@ -233,14 +233,14 @@ The following example shows how to create a Nebula Graph cluster by creating a c | `spec.storaged.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Storaged service.| | `spec.reference.name` | - | The name of the dependent controller. | | `spec.schedulerName` | - | The scheduler name. | - | `spec.imagePullPolicy` | The image policy to pull the Nebula Graph image. For details, see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). | The image pull policy in Kubernetes. | - | `spec.metad.license` | - | The configuration of the license for creating a Nebula Graph Enterprise Edition cluster. | + | `spec.imagePullPolicy` | The image policy to pull the NebulaGraph image. For details, see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). | The image pull policy in Kubernetes. 
| + | `spec.metad.license` | - | The configuration of the license for creating a NebulaGraph Enterprise Edition cluster. | !!! enterpriseonly - Make sure that you have access to Nebula Graph Enterprise Edition images before pulling the image. For details, contact our sales team ([inqury@vesoft.com](mailto:inqury@vesoft.com)) + Make sure that you have access to NebulaGraph Enterprise Edition images before pulling the image. For details, contact our sales team ([inqury@vesoft.com](mailto:inqury@vesoft.com)) -2. Create a Nebula Graph cluster. +2. Create a NebulaGraph cluster. ```bash kubectl create -f apps_v1alpha1_nebulacluster.yaml @@ -258,7 +258,7 @@ The following example shows how to create a Nebula Graph cluster by creating a c - This step is required only for creating a NebulaGraph Enterprise Edition cluster. - - Ignore this step if you are creating a Nebula Graph Community Edition cluster. + - Ignore this step if you are creating a NebulaGraph Community Edition cluster. ```bash @@ -271,7 +271,7 @@ The following example shows how to create a Nebula Graph cluster by creating a c kubectl get secrets nebula-license -o yaml ``` -4. Check the status of the Nebula Graph cluster. +4. Check the status of the NebulaGraph cluster. ```bash kubectl get nebulaclusters.apps.nebula-graph.io nebula @@ -288,14 +288,14 @@ The following example shows how to create a Nebula Graph cluster by creating a c !!! enterpriseonly - - The cluster scaling feature is for Nebula Graph Enterprise Edition only. - - Scaling a Nebula Graph cluster for Enterprise Edition is supported only with Nebula Operator version 1.1.0 or later. + - The cluster scaling feature is for NebulaGraph Enterprise Edition only. + - Scaling a NebulaGraph cluster for Enterprise Edition is supported only with Nebula Operator version 1.1.0 or later. -You can modify the value of `replicas` in `apps_v1alpha1_nebulacluster.yaml` to scale a Nebula Graph cluster.
+You can modify the value of `replicas` in `apps_v1alpha1_nebulacluster.yaml` to scale a NebulaGraph cluster. ### Scale out clusters -The following shows how to scale out a Nebula Graph cluster by changing the number of Storage services to 5: +The following shows how to scale out a NebulaGraph cluster by changing the number of Storage services to 5: 1. Change the value of the `storaged.replicas` from `3` to `5` in `apps_v1alpha1_nebulacluster.yaml`. @@ -327,7 +327,7 @@ The following shows how to scale out a Nebula Graph cluster by changing the numb schedulerName: default-scheduler ``` -2. Run the following command to update the Nebula Graph cluster CR. +2. Run the following command to update the NebulaGraph cluster CR. ```bash kubectl apply -f apps_v1alpha1_nebulacluster.yaml @@ -364,7 +364,7 @@ The principle of scaling in a cluster is the same as scaling out a cluster. You ## Delete clusters -Run the following command to delete a Nebula Graph cluster with Kubectl: +Run the following command to delete a NebulaGraph cluster with Kubectl: ```bash kubectl delete -f apps_v1alpha1_nebulacluster.yaml @@ -372,4 +372,4 @@ kubectl delete -f apps_v1alpha1_nebulacluster.yaml ## What's next -[Connect to Nebula Graph databases](../4.connect-to-nebula-graph-service.md) +[Connect to NebulaGraph databases](../4.connect-to-nebula-graph-service.md) diff --git a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md index 31104bbdf54..b6852d1b2b7 100644 --- a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md +++ b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md @@ -1,8 +1,8 @@ -# Deploy Nebula Graph clusters with Helm +# Deploy NebulaGraph clusters with Helm !!! 
Compatibility "Legacy version compatibility" - The 1.x version Nebula Operator is not compatible with Nebula Graph of version below v3.x. + The 1.x version Nebula Operator is not compatible with NebulaGraph of version below v3.x. ## Prerequisite @@ -25,18 +25,18 @@ 3. Set environment variables to your desired values. ```bash - export NEBULA_CLUSTER_NAME=nebula # The desired Nebula Graph cluster name. - export NEBULA_CLUSTER_NAMESPACE=nebula # The desired namespace where your Nebula Graph cluster locates. - export STORAGE_CLASS_NAME=gp2 # The desired StorageClass name in your Nebula Graph cluster. + export NEBULA_CLUSTER_NAME=nebula # The desired NebulaGraph cluster name. + export NEBULA_CLUSTER_NAMESPACE=nebula # The desired namespace where your NebulaGraph cluster is located. + export STORAGE_CLASS_NAME=gp2 # The desired StorageClass name in your NebulaGraph cluster. ``` -4. Create a namespace for your Nebula Graph cluster(If you have created one, skip this step). +4. Create a namespace for your NebulaGraph cluster (if you have created one, skip this step). ```bash kubectl create namespace "${NEBULA_CLUSTER_NAMESPACE}" ``` -5. Apply the variables to the Helm chart to create a Nebula Graph cluster. +5. Apply the variables to the Helm chart to create a NebulaGraph cluster. ```bash helm install "${NEBULA_CLUSTER_NAME}" nebula-operator/nebula-cluster \ @@ -45,7 +45,7 @@ --set nebula.storageClassName="${STORAGE_CLASS_NAME}" ``` -6. Check the status of the Nebula Graph cluster you created. +6. Check the status of the NebulaGraph cluster you created. ```bash kubectl -n "${NEBULA_CLUSTER_NAMESPACE}" get pod -l "app.kubernetes.io/cluster=${NEBULA_CLUSTER_NAME}" @@ -69,12 +69,12 @@ !!! enterpriseonly - - The cluster scaling feature is for Nebula Graph Enterprise Edition only. - - Scaling a Nebula Graph cluster for Enterprise Edition is supported only with Nebula Operator version 1.1.0 or later. + - The cluster scaling feature is for NebulaGraph Enterprise Edition only.
+ - Scaling a NebulaGraph cluster for Enterprise Edition is supported only with Nebula Operator version 1.1.0 or later. -You can scale a Nebula Graph cluster by defining the value of the `replicas` corresponding to the different services in the cluster. +You can scale a NebulaGraph cluster by defining the value of the `replicas` corresponding to the different services in the cluster. -For example, run the following command to scale out a Nebula Graph cluster by changing the number of Storage services from 2 (the original value) to 5: +For example, run the following command to scale out a NebulaGraph cluster by changing the number of Storage services from 2 (the original value) to 5: ```bash helm upgrade "${NEBULA_CLUSTER_NAME}" nebula-operator/nebula-cluster \ @@ -84,7 +84,7 @@ helm upgrade "${NEBULA_CLUSTER_NAME}" nebula-operator/nebula-cluster \ --set nebula.storaged.replicas=5 ``` -Similarly, you can scale in a Nebula Graph cluster by setting the value of the `replicas` corresponding to the different services in the cluster smaller than the original value. +Similarly, you can scale in a NebulaGraph cluster by setting the value of the `replicas` corresponding to the different services in the cluster smaller than the original value. !!! 
caution @@ -94,38 +94,38 @@ You can click on [nebula-cluster/values.yaml](https://github.com/vesoft-inc/nebu ## Delete clusters -Run the following command to delete a Nebula Graph cluster with Helm: +Run the following command to delete a NebulaGraph cluster with Helm: ```bash helm uninstall "${NEBULA_CLUSTER_NAME}" --namespace="${NEBULA_CLUSTER_NAMESPACE}" ``` -Or use variable values to delete a Nebula Graph cluster with Helm: +Or use variable values to delete a NebulaGraph cluster with Helm: ```bash helm uninstall nebula --namespace=nebula ``` ## What's next -[Connect to Nebula Graph Databases](../4.connect-to-nebula-graph-service.md) +[Connect to NebulaGraph Databases](../4.connect-to-nebula-graph-service.md) ## Configuration parameters of the nebula-cluster Helm chart | Parameter | Default value | Description | | :-------------------------- | :----------------------------------------------------------- | ------------------------------------------------------------ | | `nameOverride` | `nil` | Replaces the name of the chart in the `Chart.yaml` file. | -| `nebula.version` | `{{nebula.tag}}` | The version of Nebula Graph. | -| `nebula.imagePullPolicy` | `IfNotPresent` | The Nebula Graph image pull policy. For details, see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). | +| `nebula.version` | `{{nebula.tag}}` | The version of NebulaGraph. | +| `nebula.imagePullPolicy` | `IfNotPresent` | The NebulaGraph image pull policy. For details, see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). | | `nebula.storageClassName` | `nil` | The StorageClass name. StorageClass is the default persistent volume type. | -| `nebula.schedulerName` | `default-scheduler` | The scheduler name of a Nebula Graph cluster. | -| `nebula.reference` | `{"name": "statefulsets.apps", "version": "v1"}` | The workload referenced for a Nebula Graph cluster.
| +| `nebula.schedulerName` | `default-scheduler` | The scheduler name of a NebulaGraph cluster. | +| `nebula.reference` | `{"name": "statefulsets.apps", "version": "v1"}` | The workload referenced for a NebulaGraph cluster. | | `nebula.graphd.image` | `vesoft/nebula-graphd` | The image name for a Graphd service. Uses the value of `nebula.version` as its version. | | `nebula.graphd.replicas` | `2` | The number of the Graphd service. | | `nebula.graphd.env` | `[]` | The environment variables for the Graphd service. | | `nebula.graphd.resources` | `{"resources":{"requests":{"cpu":"500m","memory":"500Mi"},"limits":{"cpu":"1","memory":"1Gi"}}}` | The resource configurations for the Graphd service. | | `nebula.graphd.logStorage` | `500Mi` | The log disk storage capacity for the Graphd service. | -| `nebula.graphd.podLabels` | `{}` | Labels for the Graphd pod in a Nebula Graph cluster. | -| `nebula.graphd.podAnnotations` | `{}` | Pod annotations for the Graphd pod in a Nebula Graph cluster. | +| `nebula.graphd.podLabels` | `{}` | Labels for the Graphd pod in a NebulaGraph cluster. | +| `nebula.graphd.podAnnotations` | `{}` | Pod annotations for the Graphd pod in a NebulaGraph cluster. | | `nebula.graphd.nodeSelector` | `{}` |Labels for the Graphd pod to be scheduled to the specified node. | | `nebula.graphd.tolerations` | `{}` |Tolerations for the Graphd pod. | | `nebula.graphd.affinity` | `{}` |Affinity for the Graphd pod. | @@ -138,8 +138,8 @@ helm uninstall nebula --namespace=nebula | `nebula.metad.resources` | `{"resources":{"requests":{"cpu":"500m","memory":"500Mi"},"limits":{"cpu":"1","memory":"1Gi"}}}` | The resource configurations for the Metad service. | | `nebula.metad.logStorage` | `500Mi` | The log disk capacity for the Metad service. | | `nebula.metad.dataStorage` | `1Gi` | The data disk capacity for the Metad service. | -| `nebula.metad.podLabels` | `{}` | Labels for the Metad pod in a Nebula Graph cluster. 
| -| `nebula.metad.podAnnotations` | `{}` | Pod annotations for the Metad pod in a Nebula Graph cluster. | +| `nebula.metad.podLabels` | `{}` | Labels for the Metad pod in a NebulaGraph cluster. | +| `nebula.metad.podAnnotations` | `{}` | Pod annotations for the Metad pod in a NebulaGraph cluster. | | `nebula.metad.nodeSelector` | `{}` | Labels for the Metad pod to be scheduled to the specified node. | | `nebula.metad.tolerations` | `{}` | Tolerations for the Metad pod. | | `nebula.metad.affinity` | `{}` | Affinity for the Metad pod. | @@ -152,12 +152,12 @@ helm uninstall nebula --namespace=nebula | `nebula.storaged.resources` | `{"resources":{"requests":{"cpu":"500m","memory":"500Mi"},"limits":{"cpu":"1","memory":"1Gi"}}}` | The resource configurations for Storaged services. | | `nebula.storaged.logStorage` | `500Mi` | The log disk capacity for the Storaged service. | | `nebula.storaged.dataStorage` | `1Gi` | The data disk capacity for the Storaged service. | -| `nebula.storaged.podLabels` | `{}` | Labels for the Metad pod in a Nebula Graph cluster. | -| `nebula.storaged.podAnnotations` |`{}` | Pod annotations for the Metad pod in a Nebula Graph cluster. | +| `nebula.storaged.podLabels` | `{}` | Labels for the Storaged pod in a NebulaGraph cluster. | +| `nebula.storaged.podAnnotations` |`{}` | Pod annotations for the Storaged pod in a NebulaGraph cluster. | | `nebula.storaged.nodeSelector` | `{}` | Labels for the Storaged pod to be scheduled to the specified node. | | `nebula.storaged.tolerations` | `{}` | Tolerations for the Storaged pod. | | `nebula.storaged.affinity` | `{}` | Affinity for the Storaged pod. | | `nebula.storaged.readinessProbe` | `{}` | ReadinessProbe for the Storaged pod. | | `nebula.storaged.sidecarContainers` | `{}` | Sidecar containers for the Storaged pod. | | `nebula.storaged.sidecarVolumes` | `{}` | Sidecar volumes for the Storaged pod. | -| `imagePullSecrets` | `[]` | The Secret to pull the Nebula Graph cluster image.
| \ No newline at end of file +| `imagePullSecrets` | `[]` | The Secret to pull the NebulaGraph cluster image. | \ No newline at end of file diff --git a/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md b/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md index 644bb620fef..280a259d16c 100644 --- a/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md +++ b/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md @@ -1,14 +1,14 @@ -# Connect to Nebula Graph databases with Nebular Operator +# Connect to NebulaGraph databases with Nebula Operator -After creating a Nebula Graph cluster with Nebula Operator on Kubernetes, you can connect to Nebula Graph databases from within the cluster and outside the cluster. +After creating a NebulaGraph cluster with Nebula Operator on Kubernetes, you can connect to NebulaGraph databases from within the cluster and outside the cluster. ## Prerequisites -Create a Nebula Graph cluster with Nebula Operator on Kubernetes. For more information, see [Deploy Nebula Graph clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Deploy Nebula Graph clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). +Create a NebulaGraph cluster with Nebula Operator on Kubernetes. For more information, see [Deploy NebulaGraph clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Deploy NebulaGraph clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). -## Connect to Nebula Graph databases from within a Nebula Graph cluster +## Connect to NebulaGraph databases from within a NebulaGraph cluster -When a Nebula Graph cluster is created, Nebula Operator automatically creates a Service named `-graphd-svc` with the type `ClusterIP` under the same namespace. With the IP of the Service and the port number of the Nebula Graph database, you can connect to the Nebula Graph database. 
+When a NebulaGraph cluster is created, Nebula Operator automatically creates a Service named `-graphd-svc` with the type `ClusterIP` under the same namespace. With the IP of the Service and the port number of the NebulaGraph database, you can connect to the NebulaGraph database. 1. Run the following command to check the IP of the Service: @@ -22,7 +22,7 @@ When a Nebula Graph cluster is created, Nebula Operator automatically creates a Services of the `ClusterIP` type only can be accessed by other applications in a cluster. For more information, see [ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/). -2. Run the following command to connect to the Nebula Graph database using the IP of the `-graphd-svc` Service above: +2. Run the following command to connect to the NebulaGraph database using the IP of the `-graphd-svc` Service above: ```bash kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -port -u -p @@ -33,12 +33,12 @@ When a Nebula Graph cluster is created, Nebula Operator automatically creates a ```bash kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- nebula-console -addr 10.98.213.34 -port 9669 -u root -p vesoft - - `--image`: The image for the tool Nebula Console used to connect to Nebula Graph databases. + - `--image`: The image for the tool Nebula Console used to connect to NebulaGraph databases. - ``: The custom Pod name. - `-addr`: The IP of the `ClusterIP` Service, used to connect to Graphd services. - `-port`: The port to connect to Graphd services, the default port of which is 9669. - - `-u`: The username of your Nebula Graph account. Before enabling authentication, you can use any existing username. The default username is root. - - `-p`: The password of your Nebula Graph account. Before enabling authentication, you can use any characters as the password. + - `-u`: The username of your NebulaGraph account. 
Before enabling authentication, you can use any existing username. The default username is root. + - `-p`: The password of your NebulaGraph account. Before enabling authentication, you can use any characters as the password. A successful connection to the database is indicated if the following is returned: @@ -48,7 +48,7 @@ When a Nebula Graph cluster is created, Nebula Operator automatically creates a (root@nebula) [(none)]> ``` -You can also connect to Nebula Graph databases with **Fully Qualified Domain Name (FQDN)**. The domain format is `-graphd..svc.`: +You can also connect to NebulaGraph databases with **Fully Qualified Domain Name (FQDN)**. The domain format is `-graphd..svc.`: ```bash kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -graphd-svc.default.svc.cluster.local -port -u -p @@ -56,9 +56,9 @@ kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- The default value of `CLUSTER_DOMAIN` is `cluster.local`. -## Connect to Nebula Graph databases from outside a Nebula Graph cluster via `NodePort` +## Connect to NebulaGraph databases from outside a NebulaGraph cluster via `NodePort` -You can create a Service of type `NodePort` to connect to Nebula Graph databases from outside a Nebula Graph cluster with a node IP and an exposed node port. You can also use load balancing software provided by cloud providers (such as Azure, AWS, etc.) and set the Service of type `LoadBalancer`. +You can create a Service of type `NodePort` to connect to NebulaGraph databases from outside a NebulaGraph cluster with a node IP and an exposed node port. You can also use load balancing software provided by cloud providers (such as Azure, AWS, etc.) and set the Service of type `LoadBalancer`. The Service of type `NodePort` forwards the front-end requests via the label selector `spec.selector` to Graphd pods with labels `app.kubernetes.io/cluster: ` and `app.kubernetes.io/component: graphd`. 
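The FQDN pattern described above can be sketched in shell form; the cluster name and namespace below are illustrative values only, not taken from a real deployment:

```shell
# Assemble the Graphd Service FQDN:
#   <cluster-name>-graphd-svc.<namespace>.svc.<CLUSTER_DOMAIN>
# (cluster name and namespace here are example values)
CLUSTER_NAME=nebula
NAMESPACE=default
CLUSTER_DOMAIN=cluster.local   # the Kubernetes default
GRAPHD_FQDN="${CLUSTER_NAME}-graphd-svc.${NAMESPACE}.svc.${CLUSTER_DOMAIN}"
echo "${GRAPHD_FQDN}"
```

The printed name, `nebula-graphd-svc.default.svc.cluster.local`, matches the address form used in the `kubectl run` FQDN example above.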
@@ -96,7 +96,7 @@ Steps: type: NodePort ``` - - Nebula Graph uses port `9669` by default. `19669` is the port of the Graph service in a Nebula Graph cluster. + - NebulaGraph uses port `9669` by default. `19669` is the port of the Graph service in a NebulaGraph cluster. - The value of `targetPort` is the port mapped to the database Pods, which can be customized. 2. Run the following command to create a NodePort Service. @@ -121,9 +121,9 @@ Steps: nebula-storaged-headless ClusterIP None 9779/TCP,19779/TCP,19780/TCP,9778/TCP 23h ``` - As you see, the mapped port of Nebula Graph databases on all cluster nodes is `32236`. + As you see, the mapped port of NebulaGraph databases on all cluster nodes is `32236`. -4. Connect to Nebula Graph databases with your node IP and the node port above. +4. Connect to NebulaGraph databases with your node IP and the node port above. ```bash kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -port -u -p @@ -138,18 +138,18 @@ Steps: (root@nebula) [(none)]> ``` - - `--image`: The image for the tool Nebula Console used to connect to Nebula Graph databases. + - `--image`: The image for the tool Nebula Console used to connect to NebulaGraph databases. - ``: The custom Pod name. The above example uses `nebula-console2`. - - `-addr`: The IP of any node in a Nebula Graph cluster. The above example uses `192.168.8.24`. - - `-port`: The mapped port of Nebula Graph databases on all cluster nodes. The above example uses `32236`. - - `-u`: The username of your Nebula Graph account. Before enabling authentication, you can use any existing username. The default username is root. - - `-p`: The password of your Nebula Graph account. Before enabling authentication, you can use any characters as the password. + - `-addr`: The IP of any node in a NebulaGraph cluster. The above example uses `192.168.8.24`. + - `-port`: The mapped port of NebulaGraph databases on all cluster nodes. The above example uses `32236`. 
+ - `-u`: The username of your NebulaGraph account. Before enabling authentication, you can use any existing username. The default username is root. + - `-p`: The password of your NebulaGraph account. Before enabling authentication, you can use any characters as the password. -## Connect to Nebula Graph databases from outside a Nebula Graph cluster via Ingress +## Connect to NebulaGraph databases from outside a NebulaGraph cluster via Ingress Nginx Ingress is an implementation of Kubernetes Ingress. Nginx Ingress watches the Ingress resource of a Kubernetes cluster and generates the Ingress rules into Nginx configurations that enable Nginx to forward 7 layers of traffic. -You can use Nginx Ingress to connect to a Nebula Graph cluster from outside the cluster using a combination of the HostNetwork and DaemonSet pattern. +You can use Nginx Ingress to connect to a NebulaGraph cluster from outside the cluster using a combination of the HostNetwork and DaemonSet pattern. As HostNetwork is used, the Nginx Ingress pod cannot be scheduled to the same node. To avoid listening port conflicts, some nodes can be selected and labeled as edge nodes in advance, which are specially used for the Nginx Ingress deployment. Nginx Ingress is then deployed on these nodes in a DaemonSet mode. @@ -207,9 +207,9 @@ Steps are as follows. daemonset.apps/nginx-ingress-controller created ``` - Since the network type that is configured in Nginx Ingress is `hostNetwork`, after successfully deploying Nginx Ingress, with the IP (`192.168.8.160`) of the node where Nginx Ingress is deployed and with the external port (`9769`) you define, you can access Nebula Graph. + Since the network type that is configured in Nginx Ingress is `hostNetwork`, after successfully deploying Nginx Ingress, with the IP (`192.168.8.160`) of the node where Nginx Ingress is deployed and with the external port (`9769`) you define, you can access NebulaGraph. -4. 
Use the IP address and the port configured in the preceding steps. You can connect to Nebula Graph with Nebula Console. +4. Use the IP address and the port configured in the preceding steps. You can connect to NebulaGraph with Nebula Console. ```bash kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -port -u -p @@ -221,12 +221,12 @@ Steps are as follows. kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- nebula-console -addr 192.168.8.160 -port 9769 -u root -p vesoft ``` - - `--image`: The image for the tool Nebula Console used to connect to Nebula Graph databases. + - `--image`: The image for the tool Nebula Console used to connect to NebulaGraph databases. - `` The custom Pod name. The above example uses `nebula-console`. - `-addr`: The IP of the node where Nginx Ingress is deployed. The above example uses `192.168.8.160`. - `-port`: The port used for external network access. The above example uses `9769`. - - `-u`: The username of your Nebula Graph account. Before enabling authentication, you can use any existing username. The default username is root. - - `-p`: The password of your Nebula Graph account. Before enabling authentication, you can use any characters as the password. + - `-u`: The username of your NebulaGraph account. Before enabling authentication, you can use any existing username. The default username is root. + - `-p`: The password of your NebulaGraph account. Before enabling authentication, you can use any characters as the password. 
A successful connection to the database is indicated if the following is returned: diff --git a/docs-2.0/nebula-operator/5.operator-failover.md b/docs-2.0/nebula-operator/5.operator-failover.md index 859a00aa4b0..4a0045f51f9 100644 --- a/docs-2.0/nebula-operator/5.operator-failover.md +++ b/docs-2.0/nebula-operator/5.operator-failover.md @@ -1,6 +1,6 @@ # Self-healing -Nebula Operator calls the interface provided by Nebula Graph clusters to dynamically sense cluster service status. Once an exception is detected (for example, a component in a Nebula Graph cluster stops running), Nebula Operator automatically performs fault tolerance. This topic shows how Nebular Operator performs self-healing by simulating cluster failure of deleting one Storage service Pod in a Nebula Graph cluster. +Nebula Operator calls the interface provided by NebulaGraph clusters to dynamically sense cluster service status. Once an exception is detected (for example, a component in a NebulaGraph cluster stops running), Nebula Operator automatically performs fault tolerance. This topic shows how Nebula Operator performs self-healing by simulating a cluster failure: deleting one Storage service Pod in a NebulaGraph cluster. ## Prerequisites @@ -8,14 +8,14 @@ Nebula Operator calls the interface provided by Nebula Graph clusters to dynamic ## Steps -1. Create a Nebula Graph cluster. For more information, see [Deploy Nebula Graph clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Deploy Nebula Graph clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). +1. Create a NebulaGraph cluster. For more information, see [Deploy NebulaGraph clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Deploy NebulaGraph clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). 2. Delete the Pod named `-storaged-2` after all pods are in the `Running` status. 
```bash kubectl delete pod -storaged-2 --now ``` -`` is the name of your Nebula Graph cluster. +`` is the name of your NebulaGraph cluster. 3. Nebula Operator automates the creation of the Pod named `-storaged-2` to perform self-healing. diff --git a/docs-2.0/nebula-operator/6.get-started-with-operator.md b/docs-2.0/nebula-operator/6.get-started-with-operator.md index b129552899a..2e3cde06bfd 100644 --- a/docs-2.0/nebula-operator/6.get-started-with-operator.md +++ b/docs-2.0/nebula-operator/6.get-started-with-operator.md @@ -1,10 +1,10 @@ # Overview of using Nebula Operator -To use Nebula Operator to connect to Nebula Graph databases, see steps as follows: +To use Nebula Operator to connect to NebulaGraph databases, follow the steps below: 1. [Install Nebula Operator](2.deploy-nebula-operator.md). -2. Create a Nebula Graph cluster. +2. Create a NebulaGraph cluster. - For more information, see [Deploy Nebula Graph clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Deploy Nebula Graph clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). + For more information, see [Deploy NebulaGraph clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Deploy NebulaGraph clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). -3. [Connect to a Nebula Graph database](4.connect-to-nebula-graph-service.md). +3. [Connect to a NebulaGraph database](4.connect-to-nebula-graph-service.md). diff --git a/docs-2.0/nebula-operator/7.operator-faq.md b/docs-2.0/nebula-operator/7.operator-faq.md index f1fe53f6868..a0e120b49e7 100644 --- a/docs-2.0/nebula-operator/7.operator-faq.md +++ b/docs-2.0/nebula-operator/7.operator-faq.md @@ -1,10 +1,10 @@ # FAQ -## Does Nebula Operator support the v1.x version of Nebula Graph? +## Does Nebula Operator support the v1.x version of NebulaGraph? 
-No, because the v1.x version of Nebula Graph does not support DNS, and Nebula Operator requires the use of DNS. +No, because the v1.x version of NebulaGraph does not support DNS, and Nebula Operator requires the use of DNS. -## Does Nebula Operator support the rolling upgrade feature for Nebula Graph clusters? +## Does Nebula Operator support the rolling upgrade feature for NebulaGraph clusters? Nebula Operator currently supports cluster upgrading from version 2.5.x to version 2.6.x. diff --git a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md index 89575a801c9..8bfdce3149a 100644 --- a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md +++ b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md @@ -1,6 +1,6 @@ -# Customize configuration parameters for a Nebula Graph cluster +# Customize configuration parameters for a NebulaGraph cluster -Meta, Storage, and Graph services in a Nebula Cluster have their configurations, which are defined as `config` in the YAML file of the CR instance (Nebula Graph cluster) you created. The settings in `config` are mapped and loaded into the ConfigMap of the corresponding service in Kubernetes. +Meta, Storage, and Graph services in a NebulaGraph cluster have their own configurations, which are defined as `config` in the YAML file of the CR instance (NebulaGraph cluster) you created. The settings in `config` are mapped and loaded into the ConfigMap of the corresponding service in Kubernetes. !!! note @@ -13,12 +13,12 @@ Config map[string]string `json:"config,omitempty"` ``` ## Prerequisites -You have created a Nebula Graph cluster. For how to create a cluster with Kubectl, see [Create a cluster with Kubectl](../3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md). +You have created a NebulaGraph cluster. 
For how to create a cluster with Kubectl, see [Create a cluster with Kubectl](../3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md). ## Steps -The following example uses a cluster named `nebula` and the cluster's configuration file named `nebula_cluster.yaml` to show how to set `config` for the Graph service in a Nebula Graph cluster. +The following example uses a cluster named `nebula` and the cluster's configuration file named `nebula_cluster.yaml` to show how to set `config` for the Graph service in a NebulaGraph cluster. 1. Run the following command to access the edit page of the `nebula` cluster. diff --git a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md index d04b042efa4..be1075648ad 100644 --- a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md +++ b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md @@ -1,6 +1,6 @@ # Reclaim PVs -Nebula Operator uses PVs (Persistent Volumes) and PVCs (Persistent Volume Claims) to store persistent data. If you accidentally deletes a Nebula Graph cluster, PV and PVC objects and the relevant data will be retained to ensure data security. +Nebula Operator uses PVs (Persistent Volumes) and PVCs (Persistent Volume Claims) to store persistent data. If you accidentally delete a NebulaGraph cluster, PV and PVC objects and the relevant data will be retained to ensure data security. You can define whether to reclaim PVs or not in the configuration file of the cluster's CR instance with the parameter `enablePVReclaim`. 
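As a minimal sketch, `enablePVReclaim` can be flipped on a live cluster with a merge patch. The cluster name `nebula` and the `spec.enablePVReclaim` field path are assumptions here; verify them against your Operator version's CRD before use:

```shell
# Build a merge patch that turns on PV reclaim (field path assumed:
# spec.enablePVReclaim; "nebula" is an illustrative CR name).
PATCH='{"spec":{"enablePVReclaim":true}}'
echo "${PATCH}"
# Apply it with:
#   kubectl patch nebulacluster nebula --type merge -p "${PATCH}"
```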
diff --git a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md index 326d0d2cb27..ed79d0c4dab 100644 --- a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md +++ b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md @@ -2,17 +2,17 @@ !!! enterpriseonly - This feature is for Nebula Graph Enterprise Edition only. + This feature is for NebulaGraph Enterprise Edition only. After the Storage service is scaled out, you can decide whether to balance the data in the Storage service. -The scaling out of the Nebula Graph's Storage service is divided into two stages. In the first stage, the status of all pods is changed to `Ready`. In the second stage, the commands of `BALANCE DATA` and `BALANCE LEADER` are executed to balance data. These two stages decouple the scaling out process of the controller replica from the balancing data process, so that you can choose to perform the data balancing operation during low traffic period. The decoupling of the scaling out process from the balancing process can effectively reduce the impact on online services during data migration. +The scaling out of NebulaGraph's Storage service is divided into two stages. In the first stage, the status of all pods is changed to `Ready`. In the second stage, the `BALANCE DATA` and `BALANCE LEADER` commands are executed to balance data. These two stages decouple the scaling out of the controller replicas from the data balancing process, so that you can choose to perform the data balancing operation during a low-traffic period. Decoupling the scaling-out process from the balancing process can effectively reduce the impact on online services during data migration. 
You can define whether to balance data automatically or not with the parameter `enableAutoBalance` in the configuration file of the CR instance of the cluster you created. ## Prerequisites -You have created a Nebula Graph cluster. For how to create a cluster with Kubectl, see [Create a cluster with Kubectl](../3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md). +You have created a NebulaGraph cluster. For how to create a cluster with Kubectl, see [Create a cluster with Kubectl](../3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md). ## Steps diff --git a/docs-2.0/nebula-operator/9.upgrade-nebula-cluster.md b/docs-2.0/nebula-operator/9.upgrade-nebula-cluster.md index 6d5aa0bc29b..948223968a1 100644 --- a/docs-2.0/nebula-operator/9.upgrade-nebula-cluster.md +++ b/docs-2.0/nebula-operator/9.upgrade-nebula-cluster.md @@ -1,24 +1,24 @@ -# Upgrade Nebula Graph clusters created with Nebula Operator +# Upgrade NebulaGraph clusters created with Nebula Operator -This topic introduces how to upgrade a Nebula Graph cluster created with Nebula Operator. +This topic introduces how to upgrade a NebulaGraph cluster created with Nebula Operator. !!! Compatibility "Legacy version compatibility" - The 1.x version Nebula Operator is not compatible with Nebula Graph of version below v3.x. + The 1.x version of Nebula Operator is not compatible with NebulaGraph versions below v3.x. ## Limits -- Only for Nebula Graph clusters that have been created with Nebula Operator. +- Only for NebulaGraph clusters that have been created with Nebula Operator. -- Only support upgrading the Nebula Graph version from {{operator.upgrade_from}} to {{operator.upgrade_to}}. +- Only supports upgrading the NebulaGraph version from {{operator.upgrade_from}} to {{operator.upgrade_to}}. -## Upgrade a Nebula Graph cluster with Kubectl +## Upgrade a NebulaGraph cluster with Kubectl ### Prerequisites -You have created a Nebula Graph cluster with Kubectl. 
For details, see [Create a NebulaGraph cluster with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md). +You have created a NebulaGraph cluster with Kubectl. For details, see [Create a NebulaGraph cluster with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md). -The version of the Nebula Graph cluster to be upgraded in this topic is `{{operator.upgrade_from}}`, and its YAML file name is `apps_v1alpha1_nebulacluster.yaml`. +The version of the NebulaGraph cluster to be upgraded in this topic is `{{operator.upgrade_from}}`, and its YAML file name is `apps_v1alpha1_nebulacluster.yaml`. ### Steps @@ -135,11 +135,11 @@ The version of the Nebula Graph cluster to be upgraded in this topic is `{{opera 3 vesoft/nebula-storaged:{{nebula.tag}} ``` -## Upgrade a Nebula Graph cluster with Helm +## Upgrade a NebulaGraph cluster with Helm ### Prerequisites -You have created a Nebula Graph cluster with Helm. For details, see [Create a Nebula Graph cluster with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). +You have created a NebulaGraph cluster with Helm. For details, see [Create a NebulaGraph cluster with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). ### Steps @@ -152,11 +152,11 @@ You have created a Nebula Graph cluster with Helm. For details, see [Create a Ne 2. Set environment variables to your desired values. ```bash - export NEBULA_CLUSTER_NAME=nebula # The desired Nebula Graph cluster name. - export NEBULA_CLUSTER_NAMESPACE=nebula # The desired namespace where your Nebula Graph cluster locates. + export NEBULA_CLUSTER_NAME=nebula # The desired NebulaGraph cluster name. + export NEBULA_CLUSTER_NAMESPACE=nebula # The desired namespace where your NebulaGraph cluster is located. ``` -3. Upgrade a Nebula Graph cluster. +3. Upgrade a NebulaGraph cluster. For example, upgrade a cluster to {{nebula.tag}}. 
diff --git a/docs-2.0/nebula-spark-connector.md b/docs-2.0/nebula-spark-connector.md index 769cb7e0930..fd58628f5ef 100644 --- a/docs-2.0/nebula-spark-connector.md +++ b/docs-2.0/nebula-spark-connector.md @@ -1,14 +1,14 @@ # Nebula Spark Connector -Nebula Spark Connector is a Spark connector application for reading and writing Nebula Graph data in Spark standard format. Nebula Spark Connector consists of two parts: Reader and Writer. +Nebula Spark Connector is a Spark connector application for reading and writing NebulaGraph data in Spark standard format. Nebula Spark Connector consists of two parts: Reader and Writer. * Reader - Provides a Spark SQL interface. This interface can be used to read Nebula Graph data. It reads one vertex or edge type data at a time and assemble the result into a Spark DataFrame. + Provides a Spark SQL interface. This interface can be used to read NebulaGraph data. It reads the data of one vertex or edge type at a time and assembles the result into a Spark DataFrame. * Writer - Provides a Spark SQL interface. This interface can be used to write DataFrames into Nebula Graph in a row-by-row or batch-import way. + Provides a Spark SQL interface. This interface can be used to write DataFrames into NebulaGraph in a row-by-row or batch-import way. For more information, see [Nebula Spark Connector](https://github.com/vesoft-inc/nebula-spark-connector/blob/{{sparkconnector.branch}}/README_CN.md). @@ -16,11 +16,11 @@ Nebula Spark Connector applies to the following scenarios: -* Migrate data between different Nebula Graph clusters. +* Migrate data between different NebulaGraph clusters. -* Migrate data between different graph spaces in the same Nebula Graph cluster. +* Migrate data between different graph spaces in the same NebulaGraph cluster. -* Migrate data between Nebula Graph and other data sources. +* Migrate data between NebulaGraph and other data sources. 
* Graph computing with [Nebula Algorithm](nebula-algorithm.md). @@ -34,9 +34,9 @@ The features of Nebula Spark Connector {{sparkconnector.release}} are as follows * Supports non-attribute reading and full attribute reading. -* Supports reading Nebula Graph data into VertexRDD and EdgeRDD, and supports non-Long vertex IDs. +* Supports reading NebulaGraph data into VertexRDD and EdgeRDD, and supports non-Long vertex IDs. -* Unifies the extended data source of SparkSQL, and uses DataSourceV2 to extend Nebula Graph data. +* Unifies the extended data source of SparkSQL, and uses DataSourceV2 to extend NebulaGraph data. * Three write modes, `insert`, `update` and `delete`, are supported. `insert` mode will insert (overwrite) data, `update` mode will only update existing data, and `delete` mode will only delete data. @@ -78,21 +78,21 @@ After compilation, a similar file `nebula-spark-connector-{{sparkconnector.relea ## How to use -When using Nebula Spark Connector to reading and writing Nebula Graph data, You can refer to the following code. +When using Nebula Spark Connector to read and write NebulaGraph data, you can refer to the following code. ```scala -# Read vertex and edge data from Nebula Graph. +// Read vertex and edge data from NebulaGraph. spark.read.nebula().loadVerticesToDF() spark.read.nebula().loadEdgesToDF() -# Write dataframe data into Nebula Graph as vertex and edges. +// Write DataFrame data into NebulaGraph as vertices and edges. dataframe.write.nebula().writeVertices() dataframe.write.nebula().writeEdges() ``` `nebula()` receives two configuration parameters, including connection configuration and read-write configuration. 
-### Reading data from Nebula Graph +### Reading data from NebulaGraph ```scala val config = NebulaConnectionConfig @@ -126,31 +126,31 @@ val nebulaReadEdgeConfig: ReadNebulaConfig = ReadNebulaConfig val edge = spark.read.nebula(config, nebulaReadEdgeConfig).loadEdgesToDF() ``` -- `NebulaConnectionConfig` is the configuration for connecting to the nebula graph, as described below. +- `NebulaConnectionConfig` is the configuration for connecting to NebulaGraph, as described below. |Parameter|Required|Description| |:---|:---|:---| |`withMetaAddress` |Yes| Specifies the IP addresses and ports of all Meta Services. Separate multiple addresses with commas. The format is `ip1:port1,ip2:port2,...`. Read data is no need to configure `withGraphAddress`. | - |`withConnectionRetry` |No| The number of retries that the Nebula Java Client connected to the Nebula Graph. The default value is `1`. | + |`withConnectionRetry` |No| The number of retries for the Nebula Java Client to connect to NebulaGraph. The default value is `1`. | |`withExecuteRetry` |No| The number of retries that the Nebula Java Client executed query statements. The default value is `1`. | |`withTimeout` |No| The timeout for the Nebula Java Client request response. The default value is `6000`, Unit: ms. | -- `ReadNebulaConfig` is the configuration to read Nebula Graph data, as described below. +- `ReadNebulaConfig` is the configuration to read NebulaGraph data, as described below. |Parameter|Required|Description| |:---|:---|:---| - |`withSpace` |Yes| Nebula Graph space name. | - |`withLabel` |Yes| The Tag or Edge type name within the Nebula Graph space. | + |`withSpace` |Yes| NebulaGraph space name. | + |`withLabel` |Yes| The Tag or Edge type name within the NebulaGraph space. | |`withNoColumn` |No| Whether the property is not read. The default value is `false`, read property. If the value is `true`, the property is not read, the `withReturnCols` configuration is invalid. 
| |`withReturnCols` |No| Configures the set of properties for vertex or edges to read. the format is `List(property1,property2,...)`, The default value is `List()`, indicating that all properties are read. | |`withLimit` |No| Configure the number of rows of data read from the server by the Nebula Java Storage Client at a time. The default value is `1000`. | - |`withPartitionNum` |No| Configures the number of Spark partitions to read the Nebula Graph data. The default value is `100`. This value should not exceed the number of slices in the graph space (partition_num).| + |`withPartitionNum` |No| Configures the number of Spark partitions to read NebulaGraph data. The default value is `100`. This value should not exceed the number of slices in the graph space (partition_num).| -### Write data into Nebula Graph +### Write data into NebulaGraph !!! note - The values of columns in a dataframe are automatically written to the Nebula Graph as property values. + The values of columns in a DataFrame are automatically written to NebulaGraph as property values. ```scala val config = NebulaConnectionConfig @@ -212,25 +212,25 @@ val nebulaWriteVertexConfig = WriteNebulaVertexConfig df.write.nebula(config, nebulaWriteVertexConfig).writeVertices() ``` -- `NebulaConnectionConfig` is the configuration for connecting to the nebula graph, as described below. +- `NebulaConnectionConfig` is the configuration for connecting to NebulaGraph, as described below. |Parameter|Required|Description| |:---|:---|:---| |`withMetaAddress` |Yes| Specifies the IP addresses and ports of all Meta Services. Separate multiple addresses with commas. The format is `ip1:port1,ip2:port2,...`. | |`withGraphAddress` |Yes| Specifies the IP addresses and ports of Graph Services. Separate multiple addresses with commas. The format is `ip1:port1,ip2:port2,...`. | - |`withConnectionRetry` |No| Number of retries that the Nebula Java Client connected to the Nebula Graph. The default value is `1`. 
|
+  |`withConnectionRetry` |No| The number of times the Nebula Java Client retries connecting to NebulaGraph. The default value is `1`. |
 
- `WriteNebulaVertexConfig` is the configuration of the write vertex, as described below.
 
  |Parameter|Required|Description|
  |:---|:---|:---|
-  |`withSpace` |Yes| Nebula Graph space name. |
+  |`withSpace` |Yes| NebulaGraph space name. |
 |`withTag` |Yes| The Tag name that needs to be associated when a vertex is written. |
 |`withVidField` |Yes| The column in the DataFrame as the vertex ID. |
-  |`withVidPolicy` |No| When writing the vertex ID, Nebula Graph use mapping function, supports HASH only. No mapping is performed by default. |
+  |`withVidPolicy` |No| When writing the vertex ID, NebulaGraph uses a mapping function. Only HASH is supported. No mapping is performed by default. |
 |`withVidAsProp` |No| Whether the column in the DataFrame that is the vertex ID is also written as an property. The default value is `false`. If set to `true`, make sure the Tag has the same property name as `VidField`. |
-  |`withUser` |No| Nebula Graph user name. If [authentication](7.data-security/1.authentication/1.authentication.md) is disabled, you do not need to configure the user name and password. |
-  |`withPasswd` |No| The password for the Nebula Graph user name. |
+  |`withUser` |No| NebulaGraph user name. If [authentication](7.data-security/1.authentication/1.authentication.md) is disabled, you do not need to configure the user name and password. |
+  |`withPasswd` |No| The password for the NebulaGraph user name. |
 |`withBatch` |Yes| The number of rows of data written at a time. The default value is `1000`. |
 |`withWriteMode`|No|Write mode. The optional values are `insert` and `update`. The default value is `insert`.|
@@ -238,18 +238,18 @@ df.write.nebula(config, nebulaWriteVertexConfig).writeVertices()
 
  |Parameter|Required|Description|
  |:---|:---|:---|
-  |`withSpace` |Yes| Nebula Graph space name. |
+  |`withSpace` |Yes| NebulaGraph space name.
|
 |`withEdge` |Yes| The Edge type name that needs to be associated when a edge is written. |
 |`withSrcIdField` |Yes| The column in the DataFrame as the vertex ID. |
-  |`withSrcPolicy` |No| When writing the starting vertex ID, Nebula Graph use mapping function, supports HASH only. No mapping is performed by default. |
+  |`withSrcPolicy` |No| When writing the starting vertex ID, NebulaGraph uses a mapping function. Only HASH is supported. No mapping is performed by default. |
 |`withDstIdField` |Yes| The column in the DataFrame that serves as the destination vertex. |
-  |`withDstPolicy` |No| When writing the destination vertex ID, Nebula Graph use mapping function, supports HASH only. No mapping is performed by default. |
+  |`withDstPolicy` |No| When writing the destination vertex ID, NebulaGraph uses a mapping function. Only HASH is supported. No mapping is performed by default. |
 |`withRankField` |No| The column in the DataFrame as the rank. Rank is not written by default. |
 |`withSrcAsProperty` |No| Whether the column in the DataFrame that is the starting vertex is also written as an property. The default value is `false`. If set to `true`, make sure Edge type has the same property name as `SrcIdField`. |
 |`withDstAsProperty` |No| Whether column that are destination vertex in the DataFrame are also written as property. The default value is `false`. If set to `true`, make sure Edge type has the same property name as `DstIdField`. |
 |`withRankAsProperty` |No| Whether column in the DataFrame that is the rank is also written as property.The default value is `false`. If set to `true`, make sure Edge type has the same property name as `RankField`. |
-  |`withUser` |No| Nebula Graph user name. If [authentication](7.data-security/1.authentication/1.authentication.md) is disabled, you do not need to configure the user name and password. |
-  |`withPasswd` |No| The password for the Nebula Graph user name. |
+  |`withUser` |No| NebulaGraph user name.
If [authentication](7.data-security/1.authentication/1.authentication.md) is disabled, you do not need to configure the user name and password. |
+  |`withPasswd` |No| The password for the NebulaGraph user name. |
 |`withBatch` |Yes| The number of rows of data written at a time. The default value is `1000`. |
 |`withWriteMode`|No|Write mode. The optional values are `insert` and `update`. The default value is `insert`.|
diff --git a/docs-2.0/nebula-studio/about-studio/st-ug-limitations.md b/docs-2.0/nebula-studio/about-studio/st-ug-limitations.md
index 6e59f0f417d..0f805233185 100644
--- a/docs-2.0/nebula-studio/about-studio/st-ug-limitations.md
+++ b/docs-2.0/nebula-studio/about-studio/st-ug-limitations.md
@@ -2,13 +2,13 @@
 This topic introduces the limitations of Studio.
 
-## Nebula Graph versions
+## NebulaGraph versions
 
 !!! Note
 
-    The Studio version is released independently of the Nebula Graph core. The correspondence between the versions of Studio and the Nebula Graph core, as shown in the table below.
+    The Studio version is released independently of the NebulaGraph core. The correspondence between the versions of Studio and the NebulaGraph core is shown in the table below.
 
-| Nebula Graph version | Studio version |
+| NebulaGraph version | Studio version |
| --- | --- |
| 1.x | 1.x|
| 2.0 & 2.0.1 | 2.x |
@@ -25,11 +25,11 @@ For now, Studio v3.x supports x86_64 architecture only.
 
 ## Upload data
diff --git a/docs-2.0/nebula-studio/about-studio/st-ug-what-is-graph-studio.md b/docs-2.0/nebula-studio/about-studio/st-ug-what-is-graph-studio.md
index c8b6d8b3a83..19f03831d90 100644
--- a/docs-2.0/nebula-studio/about-studio/st-ug-what-is-graph-studio.md
+++ b/docs-2.0/nebula-studio/about-studio/st-ug-what-is-graph-studio.md
@@ -1,6 +1,6 @@
 # What is Nebula Studio
 
-Nebula Studio (Studio in short) is a browser-based visualization tool to manage Nebula Graph.
It provides you with a graphical user interface to manipulate graph schemas, import data, and run nGQL statements to retrieve data. With Studio, you can quickly become a graph exploration expert from scratch. You can view the latest source code in the Nebula Graph GitHub repository, see [nebula-studio](https://github.com/vesoft-inc/nebula-studio) for details.
+Nebula Studio (Studio in short) is a browser-based visualization tool to manage NebulaGraph. It provides you with a graphical user interface to manipulate graph schemas, import data, and run nGQL statements to retrieve data. With Studio, you can quickly become a graph exploration expert from scratch. You can view the latest source code in the NebulaGraph GitHub repository; see [nebula-studio](https://github.com/vesoft-inc/nebula-studio) for details.
 
 !!! Note
 
@@ -10,20 +10,20 @@ Nebula Studio (Studio in short) is a browser-based visualization tool to manage
 
 You can deploy Studio using the following methods:
 
-- You can deploy Studio with Docker, RPM-based, Tar-based or DEB-based and connect it to Nebula Graph. For more information, see [Deploy Studio](../deploy-connect/st-ug-deploy.md).
-- Helm-based. You can deploy Studio with Helm in the Kubernetes cluster and connect it to Nebula Graph. For more information, see [Helm-based Studio](../deploy-connect/st-ug-deploy-by-helm.md).
+- You can deploy Studio with a Docker, RPM, tar, or DEB package and connect it to NebulaGraph. For more information, see [Deploy Studio](../deploy-connect/st-ug-deploy.md).
+- You can deploy Studio with Helm in a Kubernetes cluster and connect it to NebulaGraph. For more information, see [Helm-based Studio](../deploy-connect/st-ug-deploy-by-helm.md).
 
 The functions of the above four deployment methods are the same and may be restricted when using Studio. For more information, see [Limitations](../about-studio/st-ug-limitations.md).
## Features -Studio can easily manage Nebula Graph data, with the following functions: +Studio can easily manage NebulaGraph data, with the following functions: -- On the **Schema** page, you can use the graphical user interface to create the space, Tag, Edge Type, Index, and view the statistics on the graph. It helps you quickly get started with Nebula Graph. +- On the **Schema** page, you can use the graphical user interface to create the space, Tag, Edge Type, Index, and view the statistics on the graph. It helps you quickly get started with NebulaGraph. - On the **Import** page, you can operate batch import of vertex and edge data with clicks, and view a real-time import log. @@ -33,19 +33,19 @@ Studio can easily manage Nebula Graph data, with the following functions: You can use Studio in one of these scenarios: -- You have a dataset, and you want to explore and analyze data in a visualized way. You can use Docker Compose to deploy Nebula Graph and then use Studio to explore or analyze data in a visualized way. +- You have a dataset, and you want to explore and analyze data in a visualized way. You can use Docker Compose to deploy NebulaGraph and then use Studio to explore or analyze data in a visualized way. -- You are a beginner of nGQL (Nebula Graph Query Language) and you prefer to use a GUI rather than a command-line interface (CLI) to learn the language. +- You are a beginner of nGQL (NebulaGraph Query Language) and you prefer to use a GUI rather than a command-line interface (CLI) to learn the language. ## Authentication -Authentication is not enabled in Nebula Graph by default. Users can log into Studio with the `root` account and any password. +Authentication is not enabled in NebulaGraph by default. Users can log into Studio with the `root` account and any password. -When Nebula Graph enables authentication, users can only sign into Studio with the specified account. 
For more information, see [Authentication](../../7.data-security/1.authentication/1.authentication.md).
+When authentication is enabled in NebulaGraph, users can only sign in to Studio with the specified account. For more information, see [Authentication](../../7.data-security/1.authentication/1.authentication.md).
 
 ## Check updates
diff --git a/docs-2.0/nebula-studio/deploy-connect/st-ug-connect.md b/docs-2.0/nebula-studio/deploy-connect/st-ug-connect.md
index e8ef48a3b75..7b27e913eb7 100644
--- a/docs-2.0/nebula-studio/deploy-connect/st-ug-connect.md
+++ b/docs-2.0/nebula-studio/deploy-connect/st-ug-connect.md
@@ -1,38 +1,38 @@
-# Connect to Nebula Graph
+# Connect to NebulaGraph
 
-After successfully launching Studio, you need to configure to connect to Nebula Graph. This topic describes how Studio connects to the Nebula Graph database.
+After successfully launching Studio, you need to configure the connection to NebulaGraph. This topic describes how Studio connects to the NebulaGraph database.
 
 ## Prerequisites
 
-Before connecting to the Nebula Graph database, you need to confirm the following information:
+Before connecting to the NebulaGraph database, you need to confirm the following information:
 
-- The Nebula Graph services and Studio are started. For more information, see [Deploy Studio](st-ug-deploy.md).
+- The NebulaGraph services and Studio are started. For more information, see [Deploy Studio](st-ug-deploy.md).
 
-- You have the local IP address and the port used by the Graph service of Nebula Graph. The default port is `9669`.
+- You have the local IP address and the port used by the Graph service of NebulaGraph. The default port is `9669`.
 
  !!! note
 
        Run `ifconfig` or `ipconfig` on the machine to get the IP address.
 
-- You have a Nebula Graph account and its password.
+- You have a NebulaGraph account and its password.
 
 !!!
note
 
-        If authentication is enabled in Nebula Graph and different role-based accounts are created, you must use the assigned account to connect to Nebula Graph. If authentication is disabled, you can use the `root` and any password to connect to Nebula Graph. For more information, see [Nebula Graph Database Manual](https://docs.nebula-graph.io/).
+        If authentication is enabled in NebulaGraph and different role-based accounts are created, you must use the assigned account to connect to NebulaGraph. If authentication is disabled, you can use `root` and any password to connect to NebulaGraph. For more information, see [NebulaGraph Database Manual](https://docs.nebula-graph.io/).
 
 ## Procedure
 
-To connect Studio to Nebula Graph, follow these steps:
+To connect Studio to NebulaGraph, follow these steps:
 
1. On the **Config Server** page of Studio, configure these fields:
 
-    - **Host**: Enter the IP address and the port of the Graph service of Nebula Graph. The valid format is `IP:port`. The default port is `9669`.
+    - **Host**: Enter the IP address and the port of the Graph service of NebulaGraph. The valid format is `IP:port`. The default port is `9669`.
 
      !!! note
 
-            When Nebula Graph and Studio are deployed on the same machine, you must enter the IP address of the machine, but not `127.0.0.1` or `localhost`, in the **Host** field.
+            When NebulaGraph and Studio are deployed on the same machine, you must enter the IP address of the machine, but not `127.0.0.1` or `localhost`, in the **Host** field.
 
-    - **Username** and **Password**: Fill in the log in account according to the authentication settings of Nebula Graph.
+    - **Username** and **Password**: Fill in the login account according to the authentication settings of NebulaGraph.
 
       - If authentication is not enabled, you can use `root` and any password as the username and its password.
 
@@ -44,15 +44,15 @@ To connect Studio to Nebula Graph, follow these steps:
 
2. After the configuration, click the **Connect** button.
- If you can see the **Explore** page, Studio is successfully connected to Nebula Graph. + If you can see the **Explore** page, Studio is successfully connected to NebulaGraph. - ![The Console page shows that the connection is successful](https://docs-cdn.nebula-graph.com.cn/figures/st-ug-003-en.png "Nebula Graph is connected") + ![The Console page shows that the connection is successful](https://docs-cdn.nebula-graph.com.cn/figures/st-ug-003-en.png "NebulaGraph is connected") -One session continues for up to 30 minutes. If you do not operate Studio within 30 minutes, the active session will time out and you must connect to Nebula Graph again. +One session continues for up to 30 minutes. If you do not operate Studio within 30 minutes, the active session will time out and you must connect to NebulaGraph again. ## Next to do -When Studio is successfully connected to Nebula Graph, you can do these operations: +When Studio is successfully connected to NebulaGraph, you can do these operations: - If your account has GOD or ADMIN privilege, you can create a schema on the **[Console](../quick-start/st-ug-create-schema.md)** page or on the **[Schema](../manage-schema/st-ug-crud-space.md)** page, batch import data on the **[Import](../quick-start/st-ug-import-data.md)** page, and execute nGQL statements on the **Console** page. @@ -62,10 +62,10 @@ When Studio is successfully connected to Nebula Graph, you can do these operatio ### Log out -If you want to reset Nebula Graph, you can log out and reconfigure the database. +If you want to reset NebulaGraph, you can log out and reconfigure the database. -When the Studio is still connected to a Nebula Graph database, you can click the user profile picture in the upper right corner, and choose **Log out**. If the **Config Server** page is displayed on the browser, it means that Studio has successfully disconnected from the Nebula Graph database. 
+When Studio is still connected to a NebulaGraph database, you can click the user profile picture in the upper right corner, and choose **Log out**. If the **Config Server** page is displayed on the browser, it means that Studio has successfully disconnected from the NebulaGraph database.
 
 ![reset](https://docs-cdn.nebula-graph.com.cn/figures/st-ug-000-en.png)
\ No newline at end of file
diff --git a/docs-2.0/nebula-studio/deploy-connect/st-ug-deploy-by-helm.md b/docs-2.0/nebula-studio/deploy-connect/st-ug-deploy-by-helm.md
index fda0a2f2f21..560fa3bb64b 100644
--- a/docs-2.0/nebula-studio/deploy-connect/st-ug-deploy-by-helm.md
+++ b/docs-2.0/nebula-studio/deploy-connect/st-ug-deploy-by-helm.md
@@ -46,7 +46,7 @@ Before installing Studio, you need to install the following software and ensure
 
 ## Next to do
 
-On the **Config Server** page, connect Docker-based Studio to Nebula Graph. For more information, see [Connect to Nebula Graph](st-ug-connect.md).
+On the **Config Server** page, connect Docker-based Studio to NebulaGraph. For more information, see [Connect to NebulaGraph](st-ug-connect.md).
 
 ## Configuration
diff --git a/docs-2.0/nebula-studio/deploy-connect/st-ug-deploy.md b/docs-2.0/nebula-studio/deploy-connect/st-ug-deploy.md
index 140ef8c26b8..0ce0e5dbc5e 100644
--- a/docs-2.0/nebula-studio/deploy-connect/st-ug-deploy.md
+++ b/docs-2.0/nebula-studio/deploy-connect/st-ug-deploy.md
@@ -1,6 +1,6 @@
 # Deploy Studio
 
 This topic describes how to deploy Studio locally by RPM, DEB, tar package and Docker.
@@ -11,7 +11,7 @@ This topic describes how to deploy Studio locally by RPM, DEB, tar package and D
 
 Before you deploy RPM-based Studio, you must confirm that:
 
-- The Nebula Graph services are deployed and started. For more information, see [Nebula Graph Database Manual](../../2.quick-start/1.quick-start-workflow.md).
+- The NebulaGraph services are deployed and started.
For more information, see [NebulaGraph Database Manual](../../2.quick-start/1.quick-start-workflow.md). - The Linux distribution is CentOS, install `lsof`. @@ -114,7 +114,7 @@ $ systemctl restart nebula-graph-studio.service Before you deploy DEB-based Studio, you must do a check of these: -- The Nebula Graph services are deployed and started. For more information, see [Nebula Graph Database Manual](../../2.quick-start/1.quick-start-workflow.md). +- The NebulaGraph services are deployed and started. For more information, see [NebulaGraph Database Manual](../../2.quick-start/1.quick-start-workflow.md). - The Linux distribution is Ubuntu. @@ -163,7 +163,7 @@ $ sudo dpkg -r nebula-graph-studio Before you deploy tar-based Studio, you must do a check of these: -- The Nebula Graph services are deployed and started. For more information, see [Nebula Graph Database Manual](../../2.quick-start/1.quick-start-workflow.md). +- The NebulaGraph services are deployed and started. For more information, see [NebulaGraph Database Manual](../../2.quick-start/1.quick-start-workflow.md). - Before the installation starts, the following ports are not occupied. @@ -211,7 +211,7 @@ $ kill $(lsof -t -i :7001) #stop nebula-graph-studio Before you deploy Docker-based Studio, you must do a check of these: -- The Nebula Graph services are deployed and started. For more information, see [Nebula Graph Database Manual](../../2.quick-start/1.quick-start-workflow.md). +- The NebulaGraph services are deployed and started. For more information, see [NebulaGraph Database Manual](../../2.quick-start/1.quick-start-workflow.md). - On the machine where Studio will run, Docker Compose is installed and started. For more information, see [Docker Compose Documentation](https://docs.docker.com/compose/install/ "Click to go to Docker Documentation"). 
@@ -224,11 +224,11 @@ Before you deploy Docker-based Studio, you must do a check of these: ### Procedure -To deploy and start Docker-based Studio, run the following commands. Here we use Nebula Graph v{{nebula.release}} for demonstration: +To deploy and start Docker-based Studio, run the following commands. Here we use NebulaGraph v{{nebula.release}} for demonstration: 1. Download the configuration files for the deployment. - | Installation package | Nebula Graph version | + | Installation package | NebulaGraph version | | ----- | ----- | | [nebula-graph-studio-{{studio.release}}.tar.gz](https://oss-cdn.nebula-graph.io/nebula-graph-studio/{{studio.release}}/nebula-graph-studio-{{studio.release}}.tar.gz) | {{nebula.release}} | @@ -273,4 +273,4 @@ To deploy and start Docker-based Studio, run the following commands. Here we use ## Next to do -On the **Config Server** page, connect Docker-based Studio to Nebula Graph. For more information, see [Connect to Nebula Graph](st-ug-connect.md). +On the **Config Server** page, connect Docker-based Studio to NebulaGraph. For more information, see [Connect to NebulaGraph](st-ug-connect.md). diff --git a/docs-2.0/nebula-studio/manage-schema/st-ug-crud-edge-type.md b/docs-2.0/nebula-studio/manage-schema/st-ug-crud-edge-type.md index 67bad7851bf..fddbe5cc982 100644 --- a/docs-2.0/nebula-studio/manage-schema/st-ug-crud-edge-type.md +++ b/docs-2.0/nebula-studio/manage-schema/st-ug-crud-edge-type.md @@ -1,12 +1,12 @@ # Operate edge types -After a graph space is created in Nebula Graph, you can create edge types. With Studio, you can choose to use the **Console** page or the **Schema** page to create, retrieve, update, or delete edge types. This topic introduces how to use the **Schema** page to operate edge types in a graph space only. +After a graph space is created in NebulaGraph, you can create edge types. 
With Studio, you can choose to use the **Console** page or the **Schema** page to create, retrieve, update, or delete edge types. This topic introduces only how to use the **Schema** page to operate edge types in a graph space.
 
 ## Prerequisites
 
 To operate an edge type on the **Schema** page of Studio, you must do a check of these:
 
-- Studio is connected to Nebula Graph.
+- Studio is connected to NebulaGraph.
- A graph space is created.
- Your account has the authority of GOD, ADMIN, or DBA.
@@ -38,7 +38,7 @@ To operate an edge type on the **Schema** page of Studio, you must do a check of
 
  - (Optional) Enter the description.
 
-  - **Set TTL (Time To Live)** (Optional): If no index is set for the edge type, you can set the TTL configuration: In the upper left corner of the **Set TTL** panel, click the check box to expand the panel, and configure `TTL_COL` and `TTL_ DURATION` (in seconds). For more information about both parameters, see [TTL configuration](../../3.ngql-guide/8.clauses-and-options/ttl-options.md "Click to go to Nebula Graph website").
+  - **Set TTL (Time To Live)** (Optional): If no index is set for the edge type, you can set the TTL configuration: In the upper left corner of the **Set TTL** panel, click the check box to expand the panel, and configure `TTL_COL` and `TTL_DURATION` (in seconds). For more information about both parameters, see [TTL configuration](../../3.ngql-guide/8.clauses-and-options/ttl-options.md "Click to go to NebulaGraph website").
 
6. When the preceding settings are completed, in the **Equivalent to the following nGQL statement** panel, you can see the nGQL statement equivalent to these settings.
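For readers who prefer the CLI, the **Equivalent to the following nGQL statement** panel above boils down to a plain `CREATE EDGE` statement. A minimal sketch of building one with TTL options, using hypothetical edge and property names (`follow`, `created_at`) and assuming the usual nebula-console flags:

```shell
# Hypothetical names; the Schema page generates a statement of this shape
# when the Set TTL panel is configured.
EDGE_NAME="follow"
TTL_COL="created_at"
TTL_DURATION=86400  # seconds

STMT="CREATE EDGE ${EDGE_NAME}(degree int, created_at timestamp) TTL_COL = \"${TTL_COL}\", TTL_DURATION = ${TTL_DURATION};"
echo "$STMT"

# To run it outside Studio (assumes nebula-console is installed):
# nebula-console -addr <graph_server_ip> -port 9669 -u root -p <password> -e "$STMT"
```

Running the statement from the Console page or nebula-console should produce the same schema as the Studio form.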
diff --git a/docs-2.0/nebula-studio/manage-schema/st-ug-crud-index.md b/docs-2.0/nebula-studio/manage-schema/st-ug-crud-index.md
index 7fb649625d4..a4ca58a9c60 100644
--- a/docs-2.0/nebula-studio/manage-schema/st-ug-crud-index.md
+++ b/docs-2.0/nebula-studio/manage-schema/st-ug-crud-index.md
@@ -4,13 +4,13 @@ You can create an index for a Tag and/or an Edge type. An index lets traversal s
 
 !!! note
 
-    You can create an index when a Tag or an Edge Type is created. But an index can decrease the write speed during data import. We recommend that you import data firstly and then create and rebuild an index. For more information, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md "Click to go to the Nebula Graph website").
+    You can create an index when a Tag or an Edge Type is created. But an index can decrease the write speed during data import. We recommend that you import data first and then create and rebuild an index. For more information, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md "Click to go to the NebulaGraph website").
 
 ## Prerequisites
 
 To operate an index on the **Schema** page of Studio, you must do a check of these:
 
-- Studio is connected to Nebula Graph.
+- Studio is connected to NebulaGraph.
- A graph Space, Tags, and Edge Types are created.
- Your account has the authority of GOD, ADMIN, or DBA.
diff --git a/docs-2.0/nebula-studio/manage-schema/st-ug-crud-space.md b/docs-2.0/nebula-studio/manage-schema/st-ug-crud-space.md
index c18478436cf..17ec57db191 100644
--- a/docs-2.0/nebula-studio/manage-schema/st-ug-crud-space.md
+++ b/docs-2.0/nebula-studio/manage-schema/st-ug-crud-space.md
@@ -1,15 +1,15 @@
 # Operate graph spaces
 
-When Studio is connected to Nebula Graph, you can create or delete a graph space. You can use the **Console** page or the **Schema** page to do these operations. This article only introduces how to use the **Schema** page to operate graph spaces in Nebula Graph.
+When Studio is connected to NebulaGraph, you can create or delete a graph space. You can use the **Console** page or the **Schema** page to do these operations. This article introduces only how to use the **Schema** page to operate graph spaces in NebulaGraph.
 
 ## Prerequisites
 
 To operate a graph space on the **Schema** page of Studio, you must do a check of these:
 
-- Studio is connected to Nebula Graph.
+- Studio is connected to NebulaGraph.
- Your account has the authority of GOD. It means that:
-    - If the authentication is enabled in Nebula Graph, you can use `root` and any password to sign in to Studio.
-    - If the authentication is disabled in Nebula Graph, you must use `root` and its password to sign in to Studio.
+    - If authentication is enabled in NebulaGraph, you must use `root` and its password to sign in to Studio.
+    - If authentication is disabled in NebulaGraph, you can use `root` and any password to sign in to Studio.
 
 ## Create a graph space
 
@@ -23,7 +23,7 @@ To operate a graph space on the **Schema** page of Studio, you must do a check o
 
   - **Comment**: Enter the description for graph space. The maximum length is 256 bytes. By default, there will be no comments on a space. But in this example, `Statistics of basketball players` is used.
 
-   - **Optional Parameters**: Set the values of `partition_num` and `replica_factor` respectively. In this example, these parameters are set to `100` and `1` respectively. For more information, see [`CREATE SPACE` syntax](../../3.ngql-guide/9.space-statements/1.create-space.md "Click to go to the Nebula Graph website").
+   - **Optional Parameters**: Set the values of `partition_num` and `replica_factor`. In this example, these parameters are set to `100` and `1` respectively. For more information, see [`CREATE SPACE` syntax](../../3.ngql-guide/9.space-statements/1.create-space.md "Click to go to the NebulaGraph website").
In the **Equivalent to the following nGQL statement** panel, you can see the statement equivalent to the preceding settings.
diff --git a/docs-2.0/nebula-studio/manage-schema/st-ug-crud-tag.md b/docs-2.0/nebula-studio/manage-schema/st-ug-crud-tag.md
index 64d7f45bd20..ac65573ec0a 100644
--- a/docs-2.0/nebula-studio/manage-schema/st-ug-crud-tag.md
+++ b/docs-2.0/nebula-studio/manage-schema/st-ug-crud-tag.md
@@ -1,12 +1,12 @@
 # Operate tags
 
-After a graph space is created in Nebula Graph, you can create tags. With Studio, you can use the **Console** page or the **Schema** page to create, retrieve, update, or delete tags. This topic introduces how to use the **Schema** page to operate tags in a graph space only.
+After a graph space is created in NebulaGraph, you can create tags. With Studio, you can use the **Console** page or the **Schema** page to create, retrieve, update, or delete tags. This topic introduces only how to use the **Schema** page to operate tags in a graph space.
 
 ## Prerequisites
 
 To operate a tag on the **Schema** page of Studio, you must do a check of these:
 
-- Studio is connected to Nebula Graph.
+- Studio is connected to NebulaGraph.
- A graph space is created.
- Your account has the authority of GOD, ADMIN, or DBA.
@@ -38,7 +38,7 @@ To operate a tag on the **Schema** page of Studio, you must do a check of these:
 
  - (Optional) Enter the description.
 
-  - **Set TTL (Time To Live)** (Optional): If no index is set for the tag, you can set the TTL configuration: In the upper left corner of the **Set TTL** panel, click the check box to expand the panel, and configure `TTL_COL` and `TTL_ DURATION` (in seconds). For more information about both parameters, see [TTL configuration](../../3.ngql-guide/8.clauses-and-options/ttl-options.md "Click to go to Nebula Graph website").
+  - **Set TTL (Time To Live)** (Optional): If no index is set for the tag, you can set the TTL configuration: In the upper left corner of the **Set TTL** panel, click the check box to expand the panel, and configure `TTL_COL` and `TTL_DURATION` (in seconds). For more information about both parameters, see [TTL configuration](../../3.ngql-guide/8.clauses-and-options/ttl-options.md "Click to go to NebulaGraph website").
 
6. When the preceding settings are completed, in the **Equivalent to the following nGQL statement** panel, you can see the nGQL statement equivalent to these settings.
diff --git a/docs-2.0/nebula-studio/quick-start/st-ug-create-schema.md b/docs-2.0/nebula-studio/quick-start/st-ug-create-schema.md
index 934744ce212..e773f4c2bdb 100644
--- a/docs-2.0/nebula-studio/quick-start/st-ug-create-schema.md
+++ b/docs-2.0/nebula-studio/quick-start/st-ug-create-schema.md
@@ -1,16 +1,16 @@
 # Create a schema
 
-To batch import data into Nebula Graph, you must have a graph schema. You can create a schema on the **Console** page or on the **Schema** page of Studio.
+To batch import data into NebulaGraph, you must have a graph schema. You can create a schema on the **Console** page or on the **Schema** page of Studio.
 
 !!! note
 
-    You can use nebula-console to create a schema. For more information, see [Nebula Graph Manual](../../README.md) and [Get started with Nebula Graph](../../2.quick-start/1.quick-start-workflow.md).
+    You can use nebula-console to create a schema. For more information, see [NebulaGraph Manual](../../README.md) and [Get started with NebulaGraph](../../2.quick-start/1.quick-start-workflow.md).
 
 ## Prerequisites
 
 To create a graph schema on Studio, you must do a check of these:
 
-- Studio is connected to Nebula Graph.
+- Studio is connected to NebulaGraph.
 
- Your account has the privilege of GOD, ADMIN, or DBA.
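As the note above says, the same schema can be created with nebula-console instead of the Studio GUI. A hedged sketch, with hypothetical space and tag names mirroring the `basketballplayer` example used elsewhere in these docs:

```shell
# Hypothetical schema; adjust names, partition_num, and replica_factor to taste.
STMTS='CREATE SPACE basketballplayer(partition_num=100, replica_factor=1, vid_type=FIXED_STRING(32)); USE basketballplayer; CREATE TAG player(name string, age int);'
echo "$STMTS"

# To execute (assumes nebula-console is installed and the Graph service is reachable):
# nebula-console -addr <graph_server_ip> -port 9669 -u root -p <password> -e "$STMTS"
```

The statements echoed here are the same kind Studio shows in its **Equivalent to the following nGQL statement** panel.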
diff --git a/docs-2.0/nebula-studio/quick-start/st-ug-import-data.md b/docs-2.0/nebula-studio/quick-start/st-ug-import-data.md index d93a60b4df6..fa05794f61b 100644 --- a/docs-2.0/nebula-studio/quick-start/st-ug-import-data.md +++ b/docs-2.0/nebula-studio/quick-start/st-ug-import-data.md @@ -1,12 +1,12 @@ # Import data -After CSV files of data and a schema are created, you can use the **Import** page to batch import vertex and edge data into Nebula Graph for graph exploration and data analysis. +After CSV files of data and a schema are created, you can use the **Import** page to batch import vertex and edge data into NebulaGraph for graph exploration and data analysis. ## Prerequisites To batch import data, do a check of these: -- Studio is connected to Nebula Graph. +- Studio is connected to NebulaGraph. - A schema is created. diff --git a/docs-2.0/nebula-studio/quick-start/st-ug-plan-schema.md b/docs-2.0/nebula-studio/quick-start/st-ug-plan-schema.md index caf78b48a3a..dffd1fb336f 100644 --- a/docs-2.0/nebula-studio/quick-start/st-ug-plan-schema.md +++ b/docs-2.0/nebula-studio/quick-start/st-ug-plan-schema.md @@ -1,8 +1,8 @@ # Design a schema -To manipulate graph data in Nebula Graph with Studio, you must have a graph schema. This article introduces how to design a graph schema for Nebula Graph. +To manipulate graph data in NebulaGraph with Studio, you must have a graph schema. This article introduces how to design a graph schema for NebulaGraph. -A graph schema for Nebula Graph must have these essential elements: +A graph schema for NebulaGraph must have these essential elements: - Tags (namely vertex types) and their properties. 
diff --git a/docs-2.0/nebula-studio/troubleshooting/st-ug-config-server-errors.md b/docs-2.0/nebula-studio/troubleshooting/st-ug-config-server-errors.md
index c2cb06fd0db..7032d6f7f0e 100644
--- a/docs-2.0/nebula-studio/troubleshooting/st-ug-config-server-errors.md
+++ b/docs-2.0/nebula-studio/troubleshooting/st-ug-config-server-errors.md
@@ -10,7 +10,7 @@ You can troubleshoot the problem by following the steps below.
 
 ### Step1: Confirm that the format of the **Host** field is correct
 
-You must fill in the IP address (`graph_server_ip`) and port of the Nebula Graph database Graph service. If no changes are made, the port defaults to `9669`. Even if Nebula Graph and Studio are deployed on the current machine, you must use the local IP address instead of `127.0.0.1`, `localhost` or `0.0.0.0`.
+You must fill in the IP address (`graph_server_ip`) and port of the NebulaGraph database Graph service. If no changes are made, the port defaults to `9669`. Even if NebulaGraph and Studio are deployed on the current machine, you must use the local IP address instead of `127.0.0.1`, `localhost`, or `0.0.0.0`.
 
 ### Step2: Confirm that the **username** and **password** are correct
 
@@ -18,28 +18,28 @@ If authentication is not enabled, you can use root and any password as the usern
 
 If authentication is enabled and different users are created and assigned roles, users in different roles log in with their accounts and passwords.
 
-### Step3: Confirm that Nebula Graph service is normal
+### Step3: Confirm that the NebulaGraph service is normal
 
-Check Nebula Graph service status. Regarding the operation of viewing services:
+Check the NebulaGraph service status. To view the service status:
 
-- If you compile and deploy Nebula Graph on a Linux server, refer to the [Nebula Graph service](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/deploy-nebula-graph-cluster.md).
+- If you compile and deploy NebulaGraph on a Linux server, refer to the [NebulaGraph service](../../4.deployment-and-installation/2.compile-and-install-nebula-graph/deploy-nebula-graph-cluster.md). -- If you use Nebula Graph deployed by Docker Compose and RPM, refer to the [Nebula Graph service status and ports](../deploy-connect/st-ug-deploy.md). +- If you use NebulaGraph deployed by Docker Compose and RPM, refer to the [NebulaGraph service status and ports](../deploy-connect/st-ug-deploy.md). -If the Nebula Graph service is normal, proceed to Step 4 to continue troubleshooting. Otherwise, please restart Nebula Graph service. +If the NebulaGraph service is normal, proceed to Step 4 to continue troubleshooting. Otherwise, please restart the NebulaGraph service. !!! Note - If you used `docker-compose up -d` to satrt Nebula Graph before, you must run the `docker-compose down` to stop Nebula Graph. + If you used `docker-compose up -d` to start NebulaGraph before, you must run the `docker-compose down` to stop NebulaGraph. ### Step4: Confirm the network connection of the Graph service is normal -Run a command (for example, telnet 9669) on the Studio machine to confirm whether Nebula Graph's Graph service network connection is normal. +Run a command (for example, telnet 9669) on the Studio machine to confirm whether NebulaGraph's Graph service network connection is normal. If the connection fails, check according to the following steps: -- If Studio and Nebula Graph are on the same machine, check if the port is exposed. +- If Studio and NebulaGraph are on the same machine, check if the port is exposed. -- If Studio and Nebula Graph are not on the same machine, check the network configuration of the Nebula Graph server, such as firewall, gateway, and port. +- If Studio and NebulaGraph are not on the same machine, check the network configuration of the NebulaGraph server, such as firewall, gateway, and port. 
-If you cannot connect to the Nebula Graph service after troubleshooting with the above steps, please go to the [Nebula Graph forum](https://discuss.nebula-graph.io) for consultation. \ No newline at end of file +If you cannot connect to the NebulaGraph service after troubleshooting with the above steps, please go to the [NebulaGraph forum](https://discuss.nebula-graph.io) for consultation. \ No newline at end of file diff --git a/docs-2.0/nebula-studio/troubleshooting/st-ug-connection-errors.md b/docs-2.0/nebula-studio/troubleshooting/st-ug-connection-errors.md index dc141b76f2b..bb24e42ab01 100644 --- a/docs-2.0/nebula-studio/troubleshooting/st-ug-connection-errors.md +++ b/docs-2.0/nebula-studio/troubleshooting/st-ug-connection-errors.md @@ -36,7 +36,7 @@ If the above result is not returned, stop Studio and restart it first. For detai !!! note - If you used `docker-compose up -d` to satrt Nebula Graph before, you must run the `docker-compose down` to stop Nebula Graph. + If you used `docker-compose up -d` to start NebulaGraph before, you must run the `docker-compose down` to stop NebulaGraph. ### Step3: Confirm address @@ -52,8 +52,8 @@ If the connection is refused, check according to the following steps: If the connection fails, check according to the following steps: -- If Studio and Nebula Graph are on the same machine, check if the port is exposed. +- If Studio and NebulaGraph are on the same machine, check if the port is exposed. -- If Studio and Nebula Graph are not on the same machine, check the network configuration of the Nebula Graph server, such as firewall, gateway, and port. +- If Studio and NebulaGraph are not on the same machine, check the network configuration of the NebulaGraph server, such as firewall, gateway, and port. -If you cannot connect to the Nebula Graph service after troubleshooting with the above steps, please go to the [Nebula Graph forum](https://discuss.nebula-graph.io) for consultation. 
\ No newline at end of file +If you cannot connect to the NebulaGraph service after troubleshooting with the above steps, please go to the [NebulaGraph forum](https://discuss.nebula-graph.io) for consultation. \ No newline at end of file diff --git a/docs-2.0/nebula-studio/troubleshooting/st-ug-faq.md b/docs-2.0/nebula-studio/troubleshooting/st-ug-faq.md index c8404ec02bc..bc27bab84b6 100644 --- a/docs-2.0/nebula-studio/troubleshooting/st-ug-faq.md +++ b/docs-2.0/nebula-studio/troubleshooting/st-ug-faq.md @@ -4,7 +4,7 @@ If you find that a function cannot be used, it is recommended to troubleshoot the problem according to the following steps: - 1. Confirm that Nebula Graph is the latest version. If you use Docker Compose to deploy the Nebula Graph database, it is recommended to run `docker-compose pull && docker-compose up -d` to pull the latest Docker image and start the container. + 1. Confirm that NebulaGraph is the latest version. If you use Docker Compose to deploy the NebulaGraph database, it is recommended to run `docker-compose pull && docker-compose up -d` to pull the latest Docker image and start the container. 2. Confirm that Studio is the latest version. For more information, refer to [check updates](../about-studio/st-ug-release-note.md). diff --git a/docs-2.0/reuse/source_connect-to-nebula-graph.md b/docs-2.0/reuse/source_connect-to-nebula-graph.md index 1a3963a0486..4d81742d23c 100644 --- a/docs-2.0/reuse/source_connect-to-nebula-graph.md +++ b/docs-2.0/reuse/source_connect-to-nebula-graph.md @@ -1,22 +1,22 @@ -This topic provides basic instruction on how to use the native CLI client Nebula Console to connect to Nebula Graph. +This topic provides basic instruction on how to use the native CLI client Nebula Console to connect to NebulaGraph. !!! caution - When connecting to Nebula Graph for the first time, you must [register the Storage Service](../2.quick-start/3.1add-storage-hosts.md) before querying data. 
+ When connecting to NebulaGraph for the first time, you must [register the Storage Service](../2.quick-start/3.1add-storage-hosts.md) before querying data. -Nebula Graph supports multiple types of clients, including a CLI client, a GUI client, and clients developed in popular programming languages. For more information, see the [client list](../14.client/1.nebula-client.md). +NebulaGraph supports multiple types of clients, including a CLI client, a GUI client, and clients developed in popular programming languages. For more information, see the [client list](../14.client/1.nebula-client.md). ## Prerequisites -* You have started [Nebula Graph services](https://docs.nebula-graph.io/{{nebula.release}}/4.deployment-and-installation/manage-service/). +* You have started [NebulaGraph services](https://docs.nebula-graph.io/{{nebula.release}}/4.deployment-and-installation/manage-service/). -* The machine on which you plan to run Nebula Console has network access to the Graph Service of Nebula Graph. +* The machine on which you plan to run Nebula Console has network access to the Graph Service of NebulaGraph. -* The Nebula Console version is compatible with the Nebula Graph version. +* The Nebula Console version is compatible with the NebulaGraph version. !!! note - Nebula Console and Nebula Graph of the same version number are the most compatible. There may be compatibility issues when connecting to Nebula Graph with a different version of Nebula Console. The error message `incompatible version between client and server` is displayed when there is such an issue. + Nebula Console and NebulaGraph of the same version number are the most compatible. There may be compatibility issues when connecting to NebulaGraph with a different version of Nebula Console. The error message `incompatible version between client and server` is displayed when there is such an issue. ### Steps @@ -46,7 +46,7 @@ Nebula Graph supports multiple types of clients, including a CLI client, a GUI c 5. 
In the command line interface, change the working directory to the one where the nebula-console binary file is stored. -6. Run the following command to connect to Nebula Graph. +6. Run the following command to connect to NebulaGraph. * For Linux or macOS: @@ -67,14 +67,14 @@ Nebula Graph supports multiple types of clients, including a CLI client, a GUI c | Parameter | Description | | - | - | | `-h/-help` | Shows the help menu. | - | `-addr/-address` | Sets the IP address of the Graph service. The default address is 127.0.0.1. | + | `-addr/-address` | Sets the IP address of the Graph service. The default address is 127.0.0.1. | | `-P/-port` | Sets the port number of the graphd service. The default port number is 9669. | - | `-u/-user` | Sets the username of your Nebula Graph account. Before enabling authentication, you can use any existing username. The default username is `root`. | - | `-p/-password` | Sets the password of your Nebula Graph account. Before enabling authentication, you can use any characters as the password. | + | `-u/-user` | Sets the username of your NebulaGraph account. Before enabling authentication, you can use any existing username. The default username is `root`. | + | `-p/-password` | Sets the password of your NebulaGraph account. Before enabling authentication, you can use any characters as the password. | | `-t/-timeout` | Sets an integer-type timeout threshold of the connection. The unit is second. The default value is 120. | | `-e/-eval` | Sets a string-type nGQL statement. The nGQL statement is executed once the connection succeeds. The connection stops after the result is returned. | | `-f/-file` | Sets the path of an nGQL file. The nGQL statements in the file are executed once the connection succeeds. The result will be returned and the connection stops then. | - | `-enable_ssl` | Enables SSL encryption when connecting to Nebula Graph. | + | `-enable_ssl` | Enables SSL encryption when connecting to NebulaGraph. 
| | `-ssl_root_ca_path` | Sets the storage path of the certification authority file. | | `-ssl_cert_path` | Sets the storage path of the certificate file. | | `-ssl_private_key_path` | Sets the storage path of the private key file. | diff --git a/docs-2.0/reuse/source_install-nebula-graph-by-rpm-or-deb.md b/docs-2.0/reuse/source_install-nebula-graph-by-rpm-or-deb.md index 3ba3cd7b002..425c25fad9f 100644 --- a/docs-2.0/reuse/source_install-nebula-graph-by-rpm-or-deb.md +++ b/docs-2.0/reuse/source_install-nebula-graph-by-rpm-or-deb.md @@ -1,8 +1,8 @@ -RPM and DEB are common package formats on Linux systems. This topic shows how to quickly install Nebula Graph with the RPM or DEB package. +RPM and DEB are common package formats on Linux systems. This topic shows how to quickly install NebulaGraph with the RPM or DEB package. !!! note - The console is not complied or packaged with Nebula Graph server binaries. You can install [nebula-console](https://github.com/vesoft-inc/nebula-console) by yourself. + The console is not compiled or packaged with NebulaGraph server binaries. You can install [nebula-console](https://github.com/vesoft-inc/nebula-console) by yourself. !!! enterpriseonly @@ -100,8 +100,8 @@ Wget installed. * Download the release version. - + On the [Nebula Graph Releases](https://github.com/vesoft-inc/nebula-graph/releases) page, find the required version and click **Assets**. - ![Select a Nebula Graph release version](https://docs-cdn.nebula-graph.com.cn/figures/console-1.png) + + On the [NebulaGraph Releases](https://github.com/vesoft-inc/nebula-graph/releases) page, find the required version and click **Assets**. + ![Select a NebulaGraph release version](https://docs-cdn.nebula-graph.com.cn/figures/console-1.png) + In the **Assets** area, click the package to download it. @@ -111,14 +111,14 @@ Wget installed. Nightly versions are usually used to test new features. Do not use it in a production environment. 
- + On the [Nebula Graph package](https://github.com/vesoft-inc/nebula/actions/workflows/package.yaml) page, click the latest **package** on the top of the package list. + + On the [NebulaGraph package](https://github.com/vesoft-inc/nebula/actions/workflows/package.yaml) page, click the latest **package** on the top of the package list. - ![Select a Nebula Graph nightly version](https://github.com/vesoft-inc/nebula-docs/blob/master/docs-2.0/figs/4.deployment-and-installation/2.complie-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb/nightly-page.png?raw=true) + ![Select a NebulaGraph nightly version](https://github.com/vesoft-inc/nebula-docs/blob/master/docs-2.0/figs/4.deployment-and-installation/2.complie-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb/nightly-page.png?raw=true) + In the **Artifacts** area, click the package to download it. --> -## Install Nebula Graph +## Install NebulaGraph * Use the following syntax to install with an RPM package. @@ -141,7 +141,7 @@ Wget installed. ``` !!! note - Customizing the installation path is not supported when installing Nebula Graph with a DEB package. The default installation path is `/usr/local/nebula/`. + Customizing the installation path is not supported when installing NebulaGraph with a DEB package. The default installation path is `/usr/local/nebula/`. For example, to install a DEB package for the {{nebula.release}} version, run the following command. @@ -157,6 +157,6 @@ Wget installed. 
- (Enterprise Edition)[Deploy license](https://docs.nebula-graph.com.cn/{{nebula.release}}/4.deployment-and-installation/deploy-license) -- [Start Nebula Graph](https://docs.nebula-graph.io/{{nebula.release}}/2.quick-start/5.start-stop-service/) +- [Start NebulaGraph](https://docs.nebula-graph.io/{{nebula.release}}/2.quick-start/5.start-stop-service/) -- [Connect to Nebula Graph](https://docs.nebula-graph.io/{{nebula.release}}/2.quick-start/3.connect-to-nebula-graph/) +- [Connect to NebulaGraph](https://docs.nebula-graph.io/{{nebula.release}}/2.quick-start/3.connect-to-nebula-graph/) diff --git a/docs-2.0/reuse/source_manage-service.md b/docs-2.0/reuse/source_manage-service.md index 49df6049eca..f3bd03f08c5 100644 --- a/docs-2.0/reuse/source_manage-service.md +++ b/docs-2.0/reuse/source_manage-service.md @@ -1,8 +1,8 @@ -Nebula Graph supports managing services with scripts or systemd. This topic will describe the two methods in detail. +NebulaGraph supports managing services with scripts or systemd. This topic will describe the two methods in detail. !!! enterpriseonly - Managing Nebula Graph with systemd is only available in the Nebula Graph Enterprise Edition. + Managing NebulaGraph with systemd is only available in the NebulaGraph Enterprise Edition. !!! danger @@ -10,7 +10,7 @@ Nebula Graph supports managing services with scripts or systemd. This topic will ## Manage services with script -You can use the `nebula.service` script to start, stop, restart, terminate, and check the Nebula Graph services. +You can use the `nebula.service` script to start, stop, restart, terminate, and check the NebulaGraph services. !!! 
note @@ -37,15 +37,15 @@ $ sudo /usr/local/nebula/scripts/nebula.service |`metad`|Set the Meta Service as the target service.| |`graphd`|Set the Graph Service as the target service.| |`storaged`|Set the Storage Service as the target service.| -|`all`|Set all the Nebula Graph services as the target services.| +|`all`|Set all the NebulaGraph services as the target services.| ## Manage services with systemd -For easy maintenance, Nebula Graph Enterprise Edition supports managing services with systemd. You can start, stop, restart, and check services with `systemctl` commands. +For easy maintenance, NebulaGraph Enterprise Edition supports managing services with systemd. You can start, stop, restart, and check services with `systemctl` commands. !!! note - - After installing Nebula Graph Enterprise Edition, the `.service` files required by systemd are located in the `etc/unit` path in the installation directory. Nebula Graph installed with the RPM/DEB package automatically places the `.service` files into the path `/usr/lib/systemd/system` and the parameter `ExecStart` is generated based on the specified Nebula Graph installation path, so you can use `systemctl` commands directly. + - After installing NebulaGraph Enterprise Edition, the `.service` files required by systemd are located in the `etc/unit` path in the installation directory. NebulaGraph installed with the RPM/DEB package automatically places the `.service` files into the path `/usr/lib/systemd/system` and the parameter `ExecStart` is generated based on the specified NebulaGraph installation path, so you can use `systemctl` commands directly. - The `systemctl` commands cannot be used to manage the Enterprise Edition cluster that is created with Dashboard of the Enterprise Edition. 
@@ -63,16 +63,16 @@ $ systemctl +[Connect to NebulaGraph](https://docs.nebula-graph.io/{{nebula.release}}/2.quick-start/3.connect-to-nebula-graph/) diff --git a/docs-2.0/synchronization-and-migration/2.balance-syntax.md b/docs-2.0/synchronization-and-migration/2.balance-syntax.md index 6d56cf232ea..ef0cb41a674 100644 --- a/docs-2.0/synchronization-and-migration/2.balance-syntax.md +++ b/docs-2.0/synchronization-and-migration/2.balance-syntax.md @@ -1,6 +1,6 @@ # BALANCE syntax -The `BALANCE` statements support the load balancing operations of the Nebula Graph Storage services. For more information about storage load balancing and examples for using the `BALANCE` statements, see [Storage load balance](../8.service-tuning/load-balance.md). +The `BALANCE` statements support the load balancing operations of the NebulaGraph Storage services. For more information about storage load balancing and examples for using the `BALANCE` statements, see [Storage load balance](../8.service-tuning/load-balance.md). The `BALANCE` statements are listed as follows. diff --git a/docs-2.0/synchronization-and-migration/replication-between-clusters.md b/docs-2.0/synchronization-and-migration/replication-between-clusters.md index 160902f2fcb..a007eafecc7 100644 --- a/docs-2.0/synchronization-and-migration/replication-between-clusters.md +++ b/docs-2.0/synchronization-and-migration/replication-between-clusters.md @@ -1,6 +1,6 @@ # Synchronize between two clusters -Nebula Graph supports data synchronization from a primary cluster to a secondary cluster in almost real-time. It applies to scenarios such as disaster recovery and load balancing, and helps reduce the risk of data loss and enhance data security. +NebulaGraph supports data synchronization from a primary cluster to a secondary cluster in almost real-time. It applies to scenarios such as disaster recovery and load balancing, and helps reduce the risk of data loss and enhance data security. !!! 
enterpriseonly @@ -28,7 +28,7 @@ The synchronization works as follows: - The synchronization is based on graph spaces, i.e., from one graph space in the primary cluster to another in the secondary cluster. -- About the synchronization topology, Nebula Graph: +- About the synchronization topology, NebulaGraph: - Supports synchronizing from one primary cluster to one secondary cluster, but not multiple primary clusters to one secondary cluster. @@ -53,7 +53,7 @@ The synchronization works as follows: The listener and drainer can be deployed in a standalone way, or on the machines hosting the primary and secondary clusters. The latter way can increase the machine load and decrease the service performance. -- Prepare the license file for the Nebula Graph Enterprise Edition. +- Prepare the license file for the NebulaGraph Enterprise Edition. ## Test environment @@ -74,9 +74,9 @@ The test environment for the operation example in this topic is as follows: ### Step 1: Set up the clusters, listeners, and drainer -1. Install Nebula Graph on all the machines. +1. Install NebulaGraph on all the machines. - For installation instructions, see [Install Nebula Graph](../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md). + For installation instructions, see [Install NebulaGraph](../4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md). 2. Modify the configuration files on all the machines. @@ -93,9 +93,9 @@ The test environment for the operation example in this topic is as follows: For more information about the configurations, see [Configurations](../5.configurations-and-logs/1.configurations/1.configurations.md). -3. On the machines of the primary cluster, secondary cluster, and listeners, upload the license files into the `share/resources/` directories in the Nebula Graph installation directories. +3. 
On the machines of the primary cluster, secondary cluster, and listeners, upload the license files into the `share/resources/` directories in the NebulaGraph installation directories. -4. Go to the Nebula Graph installation directories on the machines and start the needed services. +4. Go to the NebulaGraph installation directories on the machines and start the needed services. - On the primary and secondary machines, run `sudo scripts/nebula.service start all`. diff --git a/mkdocs.yml b/mkdocs.yml index a69630e25de..b606edeb339 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -1,12 +1,12 @@ # Project information -site_name: Nebula Graph Database Manual -site_description: Documentation for Nebula Graph Database -site_author: Nebula Graph +site_name: NebulaGraph Database Manual +site_description: Documentation for NebulaGraph Database +site_author: NebulaGraph site_url: https://docs.nebula-graph.io/ docs_dir: docs-2.0 repo_name: 'vesoft-inc/nebula-docs' repo_url: 'https://github.com/vesoft-inc/nebula-docs' -copyright: Copyright © 2022 Nebula Graph +copyright: Copyright © 2022 NebulaGraph # modify edit_uri: 'edit/v3.1.0/docs-2.0/' @@ -236,11 +236,11 @@ nav: - Introduction to graphs: 1.introduction/0-0-graph.md - Graph databases: 1.introduction/0-1-graph-database.md - Related technologies: 1.introduction/0-2.relates.md - - What is Nebula Graph: 1.introduction/1.what-is-nebula-graph.md + - What is NebulaGraph: 1.introduction/1.what-is-nebula-graph.md - Data model: 1.introduction/2.data-model.md - Path: 1.introduction/2.1.path.md - VID: 1.introduction/3.vid.md - - Nebula Graph architecture: + - NebulaGraph architecture: - Architecture overview: 1.introduction/3.nebula-graph-architecture/1.architecture-overview.md - Meta Service: 1.introduction/3.nebula-graph-architecture/2.meta-service.md - Graph Service: 1.introduction/3.nebula-graph-architecture/3.graph-service.md @@ -248,9 +248,9 @@ nav: - Quick start: - Quick start workflow: 2.quick-start/1.quick-start-workflow.md - - 
Step 1 Install Nebula Graph: 2.quick-start/2.install-nebula-graph.md - - Step 2 Manage Nebula Graph Service: 2.quick-start/5.start-stop-service.md - - Step 3 Connect to Nebula Graph: 2.quick-start/3.connect-to-nebula-graph.md + - Step 1 Install NebulaGraph: 2.quick-start/2.install-nebula-graph.md + - Step 2 Manage NebulaGraph Service: 2.quick-start/5.start-stop-service.md + - Step 3 Connect to NebulaGraph: 2.quick-start/3.connect-to-nebula-graph.md - Step 4 Register the Storage Service: 2.quick-start/3.1add-storage-hosts.md - Step 5 Use nGQL (CRUD): 2.quick-start/4.nebula-graph-crud.md - nGQL cheatsheet: 2.quick-start/6.cheatsheet-for-ngql.md @@ -412,21 +412,21 @@ nav: - Resource preparations: 4.deployment-and-installation/1.resource-preparations.md - Compile and install Nebula Graph: - Install Nebula Graph by compiling the source code: 4.deployment-and-installation/2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md - - Install Nebula Graph with RPM or DEB package: 4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md - - Install Nebula Graph with the tar.gz file: 4.deployment-and-installation/2.compile-and-install-nebula-graph/4.install-nebula-graph-from-tar.md - - Deploy Nebula Graph with Docker Compose: 4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md - - Deploy a Nebula Graph cluster on multiple servers: 4.deployment-and-installation/2.compile-and-install-nebula-graph/deploy-nebula-graph-cluster.md + - Install NebulaGraph with RPM or DEB package: 4.deployment-and-installation/2.compile-and-install-nebula-graph/2.install-nebula-graph-by-rpm-or-deb.md + - Install NebulaGraph with the tar.gz file: 4.deployment-and-installation/2.compile-and-install-nebula-graph/4.install-nebula-graph-from-tar.md + - Deploy NebulaGraph with Docker Compose: 
4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md + - Deploy a NebulaGraph cluster on multiple servers: 4.deployment-and-installation/2.compile-and-install-nebula-graph/deploy-nebula-graph-cluster.md - Deploy Nebula Grpah with ecosystem tools: 4.deployment-and-installation/2.compile-and-install-nebula-graph/6.deploy-nebula-graph-with-peripherals.md - - Deploy standalone Nebula Graph: 4.deployment-and-installation/standalone-deployment.md + - Deploy standalone NebulaGraph: 4.deployment-and-installation/standalone-deployment.md - Deploy license: 4.deployment-and-installation/deploy-license.md - Manage Service: 4.deployment-and-installation/manage-service.md - Connect to Service: 4.deployment-and-installation/connect-to-nebula-graph.md - Manage Storage host: 4.deployment-and-installation/manage-storage-host.md # - Manage zone: 4.deployment-and-installation/5.zone.md - Upgrade: - - Upgrade Nebula Graph to the latest version: 4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-graph-to-latest.md -# - Upgrade Nebula Graph from v2.0.x to the current version: 4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-from-200-to-latest.md - - Uninstall Nebula Graph: 4.deployment-and-installation/4.uninstall-nebula-graph.md + - Upgrade NebulaGraph to the latest version: 4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-graph-to-latest.md +# - Upgrade NebulaGraph from v2.0.x to the current version: 4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-from-200-to-latest.md + - Uninstall NebulaGraph: 4.deployment-and-installation/4.uninstall-nebula-graph.md - Configurations and logs: - Configurations: @@ -440,7 +440,7 @@ nav: - Audit logs(Enterprise): 5.configurations-and-logs/2.log-management/audit-log.md - Monitor and metrics: - - Query Nebula Graph metrics: 6.monitor-and-metrics/1.query-performance-metrics.md + - Query NebulaGraph metrics: 
6.monitor-and-metrics/1.query-performance-metrics.md - RocksDB Statistics: 6.monitor-and-metrics/2.rocksdb-statistics.md - Data security: - Authentication and authorization: @@ -479,10 +479,10 @@ nav: - Nebula Python: 14.client/5.nebula-python-client.md - Nebula Go: 14.client/6.nebula-go-client.md -# - Nebula Graph Cloud: nebula-cloud.md +# - NebulaGraph Cloud: nebula-cloud.md - NebulaGraph Cloud: - - What is Nebula Graph Cloud: nebula-cloud/1.what-is-cloud.md + - What is NebulaGraph Cloud: nebula-cloud/1.what-is-cloud.md - NebulaGraph on AWS: - NebulaGraph on AWS overview: nebula-cloud/nebula-cloud-on-aws/1.aws-overview.md - Deployment architecture: nebula-cloud/nebula-cloud-on-aws/2.aws-architecture.md @@ -516,7 +516,7 @@ nav: - Deploy and connect: - Deploy Studio: nebula-studio/deploy-connect/st-ug-deploy.md - Deploy Studio with Helm: nebula-studio/deploy-connect/st-ug-deploy-by-helm.md - - Connect to Nebula Graph: nebula-studio/deploy-connect/st-ug-connect.md + - Connect to NebulaGraph: nebula-studio/deploy-connect/st-ug-connect.md - Quick start: - Design a schema: nebula-studio/quick-start/st-ug-plan-schema.md - Create a schema: nebula-studio/quick-start/st-ug-create-schema.md @@ -567,7 +567,7 @@ nav: - What is Nebula Explorer: nebula-explorer/about-explorer/ex-ug-what-is-explorer.md - Deploy and connect: - Deploy Explorer: nebula-explorer/deploy-connect/ex-ug-deploy.md - - Connect to Nebula Graph: nebula-explorer/deploy-connect/ex-ug-connect.md + - Connect to NebulaGraph: nebula-explorer/deploy-connect/ex-ug-connect.md - Nebula Explorer License: nebula-explorer/deploy-connect/3.explorer-license.md - Page overview: nebula-explorer/ex-ug-page-overview.md - Database management: @@ -628,7 +628,7 @@ nav: - Import data from Pulsar: nebula-exchange/use-exchange/ex-ug-import-from-pulsar.md - Import data from Kafka: nebula-exchange/use-exchange/ex-ug-import-from-kafka.md - Import data from SST files: nebula-exchange/use-exchange/ex-ug-import-from-sst.md - - Export 
data from Nebula Graph: nebula-exchange/use-exchange/ex-ug-export-from-nebula.md + - Export data from NebulaGraph: nebula-exchange/use-exchange/ex-ug-export-from-nebula.md - Exchange FAQ: nebula-exchange/ex-ug-FAQ.md - Nebula Operator: @@ -639,11 +639,11 @@ nav: - Deploy clusters with Kubectl: nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md - Deploy clusters with Helm: nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md - Configure clusters: - - Custom configuration parameters for a Nebula Graph cluster: nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md + - Custom configuration parameters for a NebulaGraph cluster: nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md - Reclaim PVs: nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md - Balance storage data after scaling out: nebula-operator/8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md - - Upgrade Nebula Graph clusters: nebula-operator/9.upgrade-nebula-cluster.md - - Connect to Nebula Graph databases: nebula-operator/4.connect-to-nebula-graph-service.md + - Upgrade NebulaGraph clusters: nebula-operator/9.upgrade-nebula-cluster.md + - Connect to NebulaGraph databases: nebula-operator/4.connect-to-nebula-graph-service.md - Self-healing: nebula-operator/5.operator-failover.md - FAQ: nebula-operator/7.operator-faq.md diff --git a/third-party-lib-licenses.md b/third-party-lib-licenses.md index 98d69282716..9ded312389f 100644 --- a/third-party-lib-licenses.md +++ b/third-party-lib-licenses.md @@ -1,6 +1,6 @@ # Other Licenses -A list of the licenses of the libraries that are used for building Nebula Graph. +A list of the licenses of the libraries that are used for building NebulaGraph. 
## libreadline diff --git a/translation/2.plato_doc.md b/translation/2.plato_doc.md index 4584bb17332..5ec4e6efc66 100644 --- a/translation/2.plato_doc.md +++ b/translation/2.plato_doc.md @@ -1,7 +1,7 @@ -# Using the High-Performance Graph Computing System 'Plato' with Nebula Graph +# Using the High-Performance Graph Computing System 'Plato' with NebulaGraph +![高性能图计算系统 Plato 在 NebulaGraph 中的实践](https://www-cdn.nebula-graph.com.cn/nebula-blog/kv-seperation-00.jpeg) --> ## 1. Introduction to Graph Computation @@ -65,7 +65,7 @@ In the Process Communication phase of BSP, each node `Node_i` sends data to its For more details, see [*Gemini: A Computation-Centric Distributed Graph Processing System*](https://www.usenix.org/conference/osdi16/technical-sessions/presentation/zhu). -## 3. Integration of Plato with Nebula Graph +## 3. Integration of Plato with NebulaGraph ### 3.1 Introduction to the Graph Computation System 'Plato' @@ -74,15 +74,15 @@ Plato is Tencent's open-source industrial-grade graph computation system. It can -### 3.2 Integration with Nebula Graph +### 3.2 Integration with NebulaGraph -We performed secondary development on Plato to access the Nebula Graph data source. +We performed secondary development on Plato to access the NebulaGraph data source. -#### 3.2.1 Nebula Graph as the Input and Output data source +#### 3.2.1 NebulaGraph as the Input and Output data source -Based on Plato, Nebula Graph databases are added as a new input and output data source, which allows you to read data directly from Nebula Graph for graph computation and writes the result back to Nebula Graph. +Based on Plato, NebulaGraph databases are added as a new input and output data source, which allows you to read data directly from NebulaGraph for graph computation and writes the result back to NebulaGraph. -Nebula Graph storage layer provides a scan interface for partitions, which makes it easy to scan vertex and edge data in bulk. 
+The NebulaGraph storage layer provides a scan interface for partitions, which makes it easy to scan vertex and edge data in bulk.
```cpp
ScanEdgeIter scanEdgeWithPart(std::string spaceName,
@@ -107,11 +107,11 @@ ScanVertexIter scanVertexWithPart(std::string spaceName,
                                  bool onlyLatestVersion = false,
                                  bool enableReadFromFollower = true);
```
-We first obtain the partition distribution information in the specified graph space and assign the scan task of each partition to each node in the Plato cluster, and each node further assigns the scan tasks to each thread running on that node to achieve fast data reading in parallel. After the graph computation, the result is written to Nebula Graph through a Nebula Graph client in parallel.
+We first obtain the partition distribution information in the specified graph space and assign the scan task of each partition to a node in the Plato cluster. Each node then further assigns its scan tasks to the threads running on it, achieving fast parallel data reading. After the graph computation, the results are written back to NebulaGraph in parallel through a NebulaGraph client.
#### 3.2.2 Distributed ID Encoder
-Gemini and Plato require that vertex IDs are incremented continuously from 0, but most real data of vertex IDs do not meet this requirement, especially since Nebula Graph supports string type IDs from version 2.0.
+Gemini and Plato require vertex IDs to be consecutive integers starting from 0, but most real-world vertex IDs do not meet this requirement, especially since NebulaGraph supports string-type IDs from version 2.0.
Therefore, we need to convert the original IDs from integer or string types to the continuously increasing integer starting from 0.
Plato internally implements a single-node version of the ID encoder, where each node in the Plato cluster redundantly stores the mapping relationships of all IDs.
When the number of vertices is large, each node requires hundreds of GB of memory just to store the ID mapping table, so we need to implement a distributed ID mapper that slices the ID mapping relationships into multiple replicas to store them separately.
@@ -119,7 +119,7 @@ We hash the original IDs across different nodes and globally assign the continuo
#### 3.2.3 Supplementary Algorithms
-We added SSSP, APSP, Jaccard Similarity, and Triangle Count algorithms based on open-source Plato, and each algorithm supports input and output to the Nebula Graph data source. The currently supported algorithms are as follows:
+We added SSSP, APSP, Jaccard Similarity, and Triangle Count algorithms based on open-source Plato, and each algorithm supports input and output to the NebulaGraph data source. The currently supported algorithms are as follows:
| File Name | Algorithm Name | Category |
| :----- | ------ |------ |
@@ -200,7 +200,7 @@ exit $?
Parameter Description
-- The `INPUT` and `OUTPUT` parameters specify the input and output data sources of the algorithm. Currently, local CSV files, HDFS files, and Nebula Graph are supported as the data source. When the input and output data source is Nebula Graph, the value of `INPUT` and `OUTPUT` parameters take the form `nebula:/path/to/ nebula.conf`.
+- The `INPUT` and `OUTPUT` parameters specify the input and output data sources of the algorithm. Currently, local CSV files, HDFS files, and NebulaGraph are supported as data sources. When the input or output data source is NebulaGraph, the values of the `INPUT` and `OUTPUT` parameters take the form `nebula:/path/to/nebula.conf`.
- `WNUM` specifies the sum of the number of processes running on all nodes in the cluster. It is recommended to be the number of nodes in the cluster or the number of nodes in the NUMA architecture.
@@ -208,27 +208,27 @@ Parameter Description
```
## Read/Write
---retry=3 # The number of retries connecting to Nebula Graph.
+--retry=3 # The number of retries connecting to NebulaGraph.
--space=sf30 # The name of the graph space that can be read from and written to.
-## Read from Nebula Graph
---meta_server_addrs=192.168.8.94:9559 # The address of the metad process in Nebula Graph.
+## Read from NebulaGraph
+--meta_server_addrs=192.168.8.94:9559 # The address of the metad process in NebulaGraph.
--edge=LIKES # The name of edges to be read.
#--edge_data_field # The name of the property to be read as the weight of the edge.
--read_batch_size=10000 # The size of batch for each scan.
-## Write to Nebula Graph
---graph_server_addrs=192.168.8.94:9669 # The address of the graphd process in Nebula Graph.
---user=root # The account to log into Nebula Graph.
---password=nebula # The password to log into Nebula Graph.
+## Write to NebulaGraph
+--graph_server_addrs=192.168.8.94:9669 # The address of the graphd process in NebulaGraph.
+--user=root # The account to log into NebulaGraph.
+--password=nebula # The password to log into NebulaGraph.
# Insert or Update
---mode=insert # The pattern used to write data back to Nebula Graph: Insert or Update.
---tag=pagerank # The tag name written back to Nebula Graph.
---prop=pr # The property name corresponding to the tag that is written back to Nebula Graph.
---type=double # The property type corresponding to the tag written back to Nebula Graph.
+--mode=insert # The mode used to write data back to NebulaGraph: insert or update.
+--tag=pagerank # The tag name written back to NebulaGraph.
+--prop=pr # The property name corresponding to the tag that is written back to NebulaGraph.
+--type=double # The property type corresponding to the tag written back to NebulaGraph.
---write_batch_size=1000 # The size of back per write.
+--write_batch_size=1000 # The size of batch per write.
---err_file=/home/plato/err.txt # The file path where the data failed to be written back to Nebula Graph is stored.
+--err_file=/home/plato/err.txt # The file path where the data that failed to be written back to NebulaGraph is stored.
```
#### scripts/cluster
@@ -240,6 +240,6 @@ The `cluster` file specifies the IPs of cluster nodes where the algorithm runs. 
192.168.15.6
```
-The above is the application of Plato in Nebula Graph. The above-mentioned Plato, named Nebula Analytics in Nebula Graph, is only available for the Nebula Graph Enterprise Edition. If you are using the open-source version of Nebula Graph, you need to implement the Nebula Graph data reading and writing with Plato feature by yourself.
+The above describes how Plato is applied with NebulaGraph. This enhanced Plato, named Nebula Analytics in NebulaGraph, is only available for the NebulaGraph Enterprise Edition. If you are using the open-source version of NebulaGraph, you need to implement reading data from and writing data to NebulaGraph with Plato by yourself.
---
\ No newline at end of file
diff --git a/translation/310-performance-report.md b/translation/310-performance-report.md
index d75a37ff099..975517b868d 100644
--- a/translation/310-performance-report.md
+++ b/translation/310-performance-report.md
@@ -1,10 +1,10 @@
-# Nebula Graph v3.1.0 Performance Report
+# NebulaGraph v3.1.0 Performance Report
-This is a performance report for Nebula Graph [v3.1.0](https://github.com/vesoft-inc/nebula/tree/release-3.1).
+This is a performance report for NebulaGraph [v3.1.0](https://github.com/vesoft-inc/nebula/tree/release-3.1).
## Test result
-The query and data import performance of Nebula Graph v3.1.0 is almost the same as that of v3.0.0. Some new test cases have been added to this test for `MATCH` statements that have been optimized for property reading, and the property reading performance is significantly improved compared to v3.0.0.
+The query and data import performance of NebulaGraph v3.1.0 is almost the same as that of v3.0.0. This test adds some new cases for `MATCH` statements that have been optimized for property reading, and their property-reading performance is significantly improved compared to v3.0.0.
## Test environment
@@ -35,7 +35,7 @@ The Linked Data Benchmark Council (LDBC) is a project that aims to develop bench
2. The `vu` in `50_vu` and `100_vu` on the horizontal axis indicates `virtual user`, i.e. the number of concurrent users in the performance test. `50_vu` indicates 50 concurrent users, `100_vu` indicates 100 concurrent users, and so on.
-3. Nebula Graph v3.0.0 is used as the performance baseline.
+3. NebulaGraph v3.0.0 is used as the performance baseline.
4. ResponseTime = server-side processing time + network delay time + client-side deserializing time.
@@ -87,4 +87,4 @@ MATCH (m)-[:KNOWS]-(n) WHERE id(m)=={} OPTIONAL MATCH (n)<-[:KNOWS]-(l) RETURN l
MATCH (m)-[:KNOWS]-(n) WHERE id(m)=={} MATCH (n)-[:KNOWS]-(l) WITH m AS x, n AS y, l RETURN x.Person.firstName AS n1, y.Person.firstName AS n2, CASE WHEN l.Person.firstName is not null THEN l.Person.firstName WHEN l.Person.gender is not null THEN l.Person.birthday ELSE 'null' END AS n3 ORDER BY n1, n2, n3 LIMIT 10
-You are welcome to check out GitHub for [Nebula Graph v3.1.0](https://github.com/vesoft-inc/nebula/releases/tag/v3.1.0).
+You are welcome to check out [NebulaGraph v3.1.0](https://github.com/vesoft-inc/nebula/releases/tag/v3.1.0) on GitHub.