
Release 3.2.0 (#371)
* Added handling dbschema

* Added unit test for schema case

* Updated release version to 3.1.1

* Revert "Added unit test for schema case"

This reverts commit f164f6f.

* Revert "Added handling dbschema"

This reverts commit 4315611.

* added newline to version.properties

* updated to 3.2.0

* updated contributing.md
Aryex authored Apr 18, 2022
1 parent 2fd2d23 commit a60009b
Showing 7 changed files with 10 additions and 9 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/main.yml
@@ -20,7 +20,7 @@ jobs:
uses: actions/upload-artifact@v1
with:
name: build-jar-file
-        path: /home/runner/work/spark-connector/spark-connector/connector/target/scala-2.12/spark-vertica-connector_2.12-3.1.0.jar
+        path: /home/runner/work/spark-connector/spark-connector/connector/target/scala-2.12/spark-vertica-connector_2.12-3.2.0.jar
run-analysis:
runs-on: ubuntu-latest
needs: build
2 changes: 1 addition & 1 deletion .github/workflows/s3-integration.yml
@@ -17,7 +17,7 @@ jobs:
uses: actions/upload-artifact@v1
with:
name: build-jar-file
-        path: /home/runner/work/spark-connector/spark-connector/connector/target/scala-2.12/spark-vertica-connector_2.12-3.1.0.jar
+        path: /home/runner/work/spark-connector/spark-connector/connector/target/scala-2.12/spark-vertica-connector_2.12-3.2.0.jar
run-integration-tests_s3:
runs-on: ubuntu-latest
needs: build
6 changes: 3 additions & 3 deletions CONTRIBUTING.md
@@ -72,7 +72,7 @@ cd connector
sbt assembly
```

-Running this will run all unit tests and build the jar to target/[SCALA_VERSION]/spark-vertica-connector-assembly-3.1.0.jar
+Running this will run all unit tests and build the jar to target/scala-2.12/spark-vertica-connector-assembly-3.2.0.jar

## Step 4: Set up an environment
The easiest way to set up an environment is to spin up the docker containers for a sandbox client environment and single-node clusters for both Vertica and HDFS following [this guide.](https://github.com/vertica/spark-connector/blob/main/examples/README.md)
@@ -88,7 +88,7 @@ The next requirement is a spark application that uses the connector jar. Example
```shell
cd examples/basic-read
mkdir lib
-cp ../../connector/target/scala-2.12/spark-vertica-connector-assembly-3.1.0.jar lib
+cp ../../connector/target/scala-2.12/spark-vertica-connector-assembly-3.2.0.jar lib
sbt run
```

@@ -233,7 +233,7 @@ This project contains a series of end-to-end tests. It also contains tests for c

If you set the sparkVersion in build.sbt to 3.0.0, you will also need to use hadoop-hdfs version 2.7.0 when running `sbt run` to run the integration tests.

-Similarly, if you set the sparkVersion in build.sbt to 3.1.0, you will also need to use hadoop-hdfs version 3.3.0.
+Similarly, if you set the sparkVersion in build.sbt to 3.2.0, you will also need to use hadoop-hdfs version 3.3.0.

If a change is made to one of those bottom-layer components (ie VerticaJdbcLayer, FileStoreLayer), integration tests should be included. Additionally, if a change is large and touches many components of the connector, integration tests should be included.

Expand Down
2 changes: 1 addition & 1 deletion examples/kerberos-example/README.md
@@ -20,7 +20,7 @@ sbt assembly
Then create a `lib` folder at `/kerberos-example` and put the spark connector that you assembled inside.
```
mkdir /spark-connector/examples/kerberos-example/lib
-cp /spark-connector/connector/target/scala-2.12/spark-vertica-connector-assembly-3.1.0.jar /spark-connector/examples/kerberos-example/lib
+cp /spark-connector/connector/target/scala-2.12/spark-vertica-connector-assembly-3.2.0.jar /spark-connector/examples/kerberos-example/lib
```
Then in the example's `build.sbt`, comment out the vertica-spark connector dependency.

2 changes: 1 addition & 1 deletion examples/sparklyr/run.r
@@ -6,7 +6,7 @@ if (file.exists("done")) unlink("done")

# Create the Spark config and give access to our connector jar file
config <- spark_config()
-config$sparklyr.jars.default <- "../../connector/target/scala-2.12/spark-vertica-connector-assembly-3.1.0.jar"
+config$sparklyr.jars.default <- "../../connector/target/scala-2.12/spark-vertica-connector-assembly-3.2.0.jar"

# Submit a new Spark job that executes sparkapp.r with Spark version 3.1
spark_submit(master = "spark://localhost:7077", version = "3.1", file = "sparkapp.r", config = config)
2 changes: 1 addition & 1 deletion functional-tests/README.md
@@ -8,7 +8,7 @@ Configuration is specified with application.conf (HOCON format)
From the functional-tests directory, run the following commands:
```
mkdir lib
-cd ../connector && sbt assembly && cp target/scala-2.12/spark-vertica-connector-assembly-3.1.0.jar ../functional-tests/lib && cd ../functional-tests
+cd ../connector && sbt assembly && cp target/scala-2.12/spark-vertica-connector-assembly-3.2.0.jar ../functional-tests/lib && cd ../functional-tests
```
This will create a lib folder and then build and copy the connector JAR file to it.

3 changes: 2 additions & 1 deletion version.properties
@@ -1 +1,2 @@
-connector-version=3.1.0
+connector-version=3.2.0
+