* [sbt] version updates
* [sbt] disable build for scala 2.12
* [conf] allow not_analyzed string fields (#145)
* [not-analyzed-fields] do not analyze fields ending with _notanalyzed
* Revert "Revert "Setting version to 0.3.5-SNAPSHOT"" (reverts commit a6da0af)
* [build] update Lucene to 7.7.0
* Hotfix: issue 150 (#151)
* Remove unused code (#141)
* Revert "Setting version to 0.3.4-SNAPSHOT" (reverts commit 2f1d7be)
* README: update to 0.3.3
* README: fix javadoc badge
* remove unused param
* [sbt] version updates
* [conf] allow not_analyzed string fields (#145)
* [not-analyzed-fields] do not analyze fields ending with _notanalyzed
* [hotfix] fixes issue 150
* [tests] issue 150
* fix typo
* [blockEntityLinkage] drop queryPartColumns
* [sbt] version updates
* [scripts] fix shell
* Block linkage: allow a block linker with Row to Query (#154)
* [linkage] block linker with => Query
* [linkage] block linker is Row => Query
* remove Query analyzer on methods
* [sbt] set version to 0.3.6-SNAPSHOT
* Feature: allow custom analyzers during compile time (#160)
* [analyzers] custom analyzer
* test return null
* [travis] travis_wait 1 min
* Revert "[travis] travis_wait 1 min" (reverts commit c79456e)
* use lucene examples
* custom analyzer return null
* fix java reflection
* add docs
* Update to Lucene 8 (#161)
* [lucene] upgrade to version 8.0.0
* [lucene] remove ngram analyzer
* delete ngram analyzer
* minor fix
* add scaladoc
* LuceneRDDResponseSpec.collect() should work when no results are found - Issue #166 (#168)
* [sbt] update scalatest 3.0.7
* [sbt] update spark 2.4.1
* [build.sbt] add credentials file
* [plugins] update versions
* [sbt] update to 0.13.18
* Allow Lucene Analyzers per field (#164)
* [issue_163] per field analysis
* [sbt] update scalatest to 3.0.7
* [issue_163] fix docs; order of arguments
* fixes on ShapeLuceneRDD
* [issue_163] fix test
* issue_163: minor fix
* introduce LuceneRDDParams case class
* fix apply in LuceneRDDParams
* [issue_163] remove duplicate apply defn
* add extra LuceneRDD.apply
* [issue_165] throw runtime exception; use traversable trait; handle multi-valued fields in DataFrames (#170)
* [config] refactor; add environment variables in config (#173)
* [refactor] configuration loading
* [travis] code hygiene
* Make LuceneRDDResponse extend RDD[Row] (#175)
* WIP
* fix tests
* remove SparkDoc class
* make test compile
* use GenericRowWithSchema
* tests: getDouble score
* score is a float
* fix casting issue with Seq[String]
* tests: LuceneDocToSparkRowpec
* more tests
* LuceneDocToSparkRowpec: more tests
* LuceneDocToSparkRowpec: fix tests
* LuceneDocToSparkRow: fix Number type inference
* LuceneDocToSparkRowpec: fix tests
* implicits: remove StoredField for Numeric types
* implicits: revert remove StoredField for Numeric types
* fix more tests
* [tests] fix LuceneRDDResponse .toDF()
* fix multivalued fields
* fix score type issue
* minor
* stored fields for numerics
* hotfix: TextField must be stored using StoredField
* hotfix: stringToDocument implicit
* link issue 179
* fix tests
* remove _.toRow() calls
* fix compile issue
* [sbt] update to spark 2.4.2
* [travis] use spark 2.4.2