The PySpark documentation shows DataFrames being constructed via sqlContext, sqlContext.read, and a variety of other methods.
(See https://spark.apache.org/docs/1.6.2/api/python/pyspark.sql.html)
Is it possible to subclass DataFrame and instantiate it independently? I would like to add custom methods and functionality to the base DataFrame class.
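For example, something along these lines, just as a rough sketch of the intent (MyDataFrame and count_nulls are names I made up; the __init__ re-wrapping is based on DataFrame taking a Java DataFrame and a SQLContext in 1.6, but I'm not sure this is the supported way):

    from pyspark.sql import DataFrame

    class MyDataFrame(DataFrame):
        """Hypothetical subclass adding convenience methods."""

        def __init__(self, df):
            # Re-wrap an existing DataFrame's underlying Java object
            super(MyDataFrame, self).__init__(df._jdf, df.sql_ctx)

        def count_nulls(self, col_name):
            # Example of the kind of helper I'd like to add
            return self.filter(self[col_name].isNull()).count()

    # Intended usage:
    # df = sqlContext.read.json("some/path.json")
    # mdf = MyDataFrame(df)
    # mdf.count_nulls("age")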