Hi,
Thanks for sharing your work; I now have a Spark cluster running on my laptop in minutes, cool!
But when I submit some code to the cluster, it fails because of the remove_alias workaround.
But when it tries to use InetAddress.getLocalHost.getHostAddress, an exception is thrown.
The code I am using is the spark-cassandra-connector. The exception is thrown here: https://github.com/datastax/spark-cassandra-connector/blob/v1.3.0-RC1/spark-cassandra-connector/src/main/scala/com/datastax/spark/connector/cql/CassandraConnectorConf.scala#L108
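For context, the failure can be reproduced outside the connector with the same lookup it performs at that line. A minimal sketch (the class name `LocalHostCheck` is mine, not from the connector):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class LocalHostCheck {
    public static void main(String[] args) {
        try {
            // getLocalHost() resolves the container's own hostname via
            // /etc/hosts or DNS; if the remove_alias workaround stripped
            // that entry, the name no longer resolves and this throws.
            InetAddress addr = InetAddress.getLocalHost();
            System.out.println("resolved: " + addr.getHostAddress());
        } catch (UnknownHostException e) {
            // This is the same UnknownHostException surfacing inside
            // CassandraConnectorConf when the connector builds its config.
            System.out.println("cannot resolve local hostname: " + e.getMessage());
        }
    }
}
```

Running this inside one of the Spark containers should show whether the container can resolve its own hostname at all.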
If I remove remove_alias from the spark-shell launcher, the spark-cassandra-connector works, but later, when Spark deploys the submitted code to its cluster of workers, it fails because the nodes try to use their internal names to communicate.
This looks like a linking problem. I will try removing the remove_alias workaround and linking each node properly so they can communicate using their names.
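One way to do that linking (a sketch only; it assumes a Docker version with user-defined networks, and the container and image names are placeholders) is to put every node on the same network so container names resolve through Docker's embedded DNS instead of relying on the remove_alias hack:

```shell
# Create a dedicated network for the cluster; containers attached to it
# can resolve each other by container name.
docker network create spark-net

# Master and workers join the same network; the worker reaches the master
# by its container name rather than an IP or a stripped alias.
docker run -d --name spark-master  --network spark-net my/spark-master
docker run -d --name spark-worker1 --network spark-net \
  -e SPARK_MASTER=spark://spark-master:7077 my/spark-worker
```

Older Docker setups achieved a similar effect with `--link`, which injects the peer's name into the container's /etc/hosts.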