add property mapping description (#148)
Nicole00 authored Jun 18, 2024
1 parent 02be1e0 commit 8f465cb
Showing 3 changed files with 7 additions and 1 deletion.
4 changes: 3 additions & 1 deletion README.md
@@ -70,7 +70,9 @@ NebulaGraph Spark Connector support spark 2.2, 2.4 and 3.0.
<version>2.1.1</version>
</dependency>
```
When writing, ensure that the DataFrame column names used as properties match the property names in NebulaGraph. If they do not match, rename the columns with `DataFrame.withColumnRenamed` first.
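For example, a minimal sketch of the renaming step (the JSON source path `vertex` follows the repository's example; the column name `name1` and the target property `name` are illustrative assumptions, not taken from the repository):

```scala
import org.apache.spark.sql.SparkSession

object RenameBeforeWrite {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").getOrCreate()

    // Suppose the source data names the column name1, while the
    // NebulaGraph tag defines the property as name.
    val df = spark.read.json("vertex")

    // Align the DataFrame column with the NebulaGraph property name
    // before writing the vertices.
    val renamed = df.withColumnRenamed("name1", "name")
    renamed.show()
  }
}
```

After renaming, the DataFrame can be written with the connector as shown in the following snippet.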
Write DataFrame `INSERT` into NebulaGraph as Vertices:
```
val config = NebulaConnectionConfig
2 changes: 2 additions & 0 deletions README_CN.md
@@ -79,6 +79,8 @@ Nebula Spark Connector supports Spark 2.2, 2.4 and 3.x.
</dependency>
```

When writing, ensure that the DataFrame column names used as properties match the property names in NebulaGraph; if they do not match, rename the columns with `DataFrame.withColumnRenamed` first.

Write a DataFrame into Nebula Graph as vertices with `INSERT`:

```
@@ -95,6 +95,8 @@ object NebulaSparkWriterExample {
*/
def writeVertex(spark: SparkSession): Unit = {
val df = spark.read.json("vertex")
    // If a df column is named name1 while the NebulaGraph property is name,
    // rename the column to name before writing, e.g.:
    // val renamed = df.withColumnRenamed("name1", "name")
df.show()

val config = getNebulaConnectionConfig()
