Problem using Spark SQL from PySpark

jxk 2017-08-17 05:09:39
I'm new to Spark and wanted to write a small exercise that uses Spark SQL from PySpark.
The code is as follows:
from pyspark import SparkContext
from pyspark.sql import SQLContext, Row
from pyspark.sql.types import *

sc = SparkContext.getOrCreate()
sqlContext = SQLContext(sc)
# Read the raw CSV as an RDD of text lines
hvacText = sc.textFile("/home/spark/aaa.csv", use_unicode=False)
# Target schema: ten FloatType columns plus two StringType columns
hvacSchema = StructType([
    StructField("date", StringType(), True),
    StructField("a1", FloatType(), True), StructField("a2", FloatType(), True),
    StructField("a3", FloatType(), True), StructField("a4", FloatType(), True),
    StructField("a5", FloatType(), True), StructField("a6", FloatType(), True),
    StructField("a7", FloatType(), True), StructField("a8", FloatType(), True),
    StructField("a9", FloatType(), True), StructField("a10", FloatType(), True),
    StructField("abc", StringType(), True)])
# Split each line on commas and build one Row per record
ccpart = hvacText.map(lambda le: le.split(","))
hvac = ccpart.map(lambda p: Row(date=p[0], a1=p[1], a2=p[2], a3=p[3], a4=p[4], a5=p[5],
                                a6=p[6], a7=p[7], a8=p[8], a9=p[9], a10=p[10], abc=p[11]))

hvacdf = sqlContext.createDataFrame(hvac, hvacSchema)
hvacdf.registerTempTable("hvac")

xx = sqlContext.sql("SELECT * FROM hvac WHERE abc='yyy'")
xx.show()

The data is as follows:
aaa.csv
2015/1/1,,6,8,24,13,13,18,10,10,27,yyy
2015/1/2,11,15,14,13,9,10,19,13,14,13,ddd
2015/1/3,10,8,12,13,8,3,7,11,10,9,ccc
2015/1/4,9,6,6,3,10,9,9,13,14,13,eee

=============================================
The error is as follows:
>>> xx.show()
17/08/17 03:39:33 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/opt/spark/spark-2.2.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 177, in main
process()
File "/opt/spark/spark-2.2.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 172, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/opt/spark/spark-2.2.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/serializers.py", line 268, in dump_stream
vs = list(itertools.islice(iterator, batch))
File "/opt/spark/spark-2.2.0-bin-hadoop2.7/python/pyspark/sql/session.py", line 520, in prepare
verify_func(obj, schema)
File "/opt/spark/spark-2.2.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/sql/types.py", line 1354, in _verify_type
_verify_type(obj[f.name], f.dataType, f.nullable)
File "/opt/spark/spark-2.2.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/sql/types.py", line 1324, in _verify_type
raise TypeError("%s can not accept object %r in type %s" % (dataType, obj, type(obj)))
TypeError: FloatType can not accept object '' in type <type 'str'>

at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)

Could someone please take a look and tell me how to fix this? Thanks!
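
The traceback points at the root cause: the schema declares a1 through a10 as FloatType, but every value produced by le.split(",") is still a Python string, and the first data row even has an empty field, so schema verification rejects '' for a FloatType column. Below is a minimal sketch of one possible fix, assuming empty fields should become NULL; the helper name to_float is made up for illustration. It also emits plain tuples in the schema's column order rather than Row(**kwargs), since a Row built from keyword arguments sorts its fields alphabetically in Spark 2.x and can end up misaligned with an explicit schema.

# Hypothetical helper: convert a split string field to float, mapping empty
# fields (e.g. the second field of the 2015/1/1 line) to None, which is
# allowed because the schema marks these columns as nullable.
def to_float(s):
    s = s.strip()
    return float(s) if s else None

# Emit plain tuples in the same order as hvacSchema (date, a1..a10, abc)
# so the values line up positionally with the declared column types.
hvac = ccpart.map(lambda p: (p[0],
                             to_float(p[1]), to_float(p[2]), to_float(p[3]),
                             to_float(p[4]), to_float(p[5]), to_float(p[6]),
                             to_float(p[7]), to_float(p[8]), to_float(p[9]),
                             to_float(p[10]), p[11]))

hvacdf = sqlContext.createDataFrame(hvac, hvacSchema)
hvacdf.registerTempTable("hvac")
sqlContext.sql("SELECT * FROM hvac WHERE abc='yyy'").show()

With that change, the empty a1 value in the 2015/1/1 row should come through as null instead of raising the TypeError, and the WHERE abc='yyy' query should return that row.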
