To run this Spark job on your local machine, you need to set up Kafka and create a producer first; see
http://kafka.apache.org/documentation.html#quickstart
and then run the example:
$ bin/spark-submit --jars \
    external/kafka-assembly/target/scala-*/spark-streaming-kafka-assembly-*.jar \
    kafka-direct-iot-sql.py \
    localhost:9092 test
1. Define a function that converts each RDD of the JSON DStream to a DataFrame and runs SQL queries on it (steps 1 and 2 are sketched in the consumer example below)
2. Process each RDD of the DStream coming in from Kafka
3. Set the number of simulated messages to generate
4. Generate JSON output (steps 3 and 4 are sketched in the producer example below)
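
Steps 1 and 2 might look like the following sketch, written against the old direct-stream API in pyspark.streaming.kafka (Spark 1.x, which the assembly JAR above targets). The field names (deviceName, temperature) and the SQL query are assumptions for illustration, not necessarily what kafka-direct-iot-sql.py actually contains:

    import json
    import sys

    from pyspark import SparkContext
    from pyspark.sql import SQLContext, Row
    from pyspark.streaming import StreamingContext
    from pyspark.streaming.kafka import KafkaUtils

    def get_sql_context(spark_context):
        # Lazily instantiate a singleton SQLContext so it can be
        # reused across batches inside foreachRDD.
        if 'sqlContextSingleton' not in globals():
            globals()['sqlContextSingleton'] = SQLContext(spark_context)
        return globals()['sqlContextSingleton']

    def process(time, rdd):
        # 1. Convert the RDD of parsed JSON records to a DataFrame
        #    and run a SQL query over it.
        if rdd.isEmpty():
            return
        sql_context = get_sql_context(rdd.context)
        df = sql_context.createDataFrame(rdd.map(lambda d: Row(**d)))
        df.registerTempTable("iot")
        sql_context.sql(
            "SELECT deviceName, AVG(temperature) AS avg_temp "
            "FROM iot GROUP BY deviceName").show()

    if __name__ == "__main__":
        brokers, topic = sys.argv[1:3]
        sc = SparkContext(appName="kafka-direct-iot-sql")
        ssc = StreamingContext(sc, 10)  # 10-second batch interval

        # Receiver-less "direct" Kafka stream; values are JSON strings.
        stream = KafkaUtils.createDirectStream(
            ssc, [topic], {"metadata.broker.list": brokers})
        parsed = stream.map(lambda kv: json.loads(kv[1]))

        # 2. Process each RDD of the DStream coming in from Kafka.
        parsed.foreachRDD(process)

        ssc.start()
        ssc.awaitTermination()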
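
Steps 3 and 4 belong to the producer side. A minimal sketch using the kafka-python package (an assumption; the console producer from the Kafka quickstart also works) might be the following, with the message fields again purely illustrative:

    import json
    import random
    import time

    from kafka import KafkaProducer  # pip install kafka-python

    NUM_MESSAGES = 100  # 3. number of simulated messages to generate

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"))

    for _ in range(NUM_MESSAGES):
        # 4. Generate JSON output: one simulated sensor reading
        #    per message, sent to the "test" topic.
        reading = {
            "deviceName": "sensor-%d" % random.randint(1, 10),
            "temperature": round(random.uniform(15.0, 35.0), 2),
            "timestamp": int(time.time()),
        }
        producer.send("test", reading)
        time.sleep(0.1)

    producer.flush()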