Spark ships with a built-in binaryFile format that can load any binary file and store its content as bytes. Each file becomes a single row with four columns: path, modificationTime, length, and content. The binary content can later be written back to the appropriate file format as required.

Let's quickly read some binary files for demonstration. First, the files we are going to read:

%%sh
ls -lhtr dataset/files
Check files available

Let's read one .png file and check the resulting DataFrame:

# Let's read a .png file
df_spark_png = spark \
    .read \
    .format("binaryFile") \
    .load("dataset/files/spark.png")
df_spark_png.printSchema()
df_spark_png.show()
PNG file as binary

Next, let's read all .png files from the path:

# Let's read all .png files
df_spark_png = spark \
    .read \
    .format("binaryFile") \
    .load("dataset/files/*.png")
df_spark_png.printSchema()
df_spark_png.show()
All PNG files in path

Can we read a PDF file? Yes.

# We can even read PDF files
df_spark_pdf = spark \
    .read \
    .format("binaryFile") \
    .load("dataset/files/*.pdf")
df_spark_pdf.printSchema()
df_spark_pdf.show()
PDF file as binary

Can we read a TXT file as binary? Yes.

# We can even read Text files as binary files
df_spark_txt = spark \
    .read \
    .format("binaryFile") \
    .load("dataset/example.txt")
df_spark_txt.printSchema()
df_spark_txt.show()
TXT file as Binary
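The content column carries raw bytes, so the original text is recovered by decoding it. A plain-Python sketch, using a stand-in byte string in place of the value collected from the DataFrame:

```python
# Stand-in for df_spark_txt.select("content").collect()[0][0],
# which returns the file's raw bytes.
sample_bytes = bytearray(b"Hello from the example file!\n")

# Decode with the file's encoding (assuming UTF-8 here) to get the text back.
text = bytes(sample_bytes).decode("utf-8")
print(text)
```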

So, can we now write the files back from the binary content? Yes.

# Let's generate the text file back from the binary content
byte_content = df_spark_txt.select("content").collect()[0][0]
# Write the byte content back to a file
# (the with block closes the file automatically; no explicit close() needed)
with open("dataset/new_example.txt", "wb") as f:
    f.write(byte_content)
Binary to TXT file
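A quick way to confirm the write-back worked is to read the new file in binary mode and compare it byte-for-byte with the collected content. A sketch using a temporary directory and a stand-in byte string:

```python
import os
import tempfile

# Stand-in for the bytes collected from the DataFrame's content column.
byte_content = b"example text content\n"

out_dir = tempfile.mkdtemp()
out_path = os.path.join(out_dir, "new_example.txt")

# Write the bytes out, then read them back in binary mode.
with open(out_path, "wb") as f:
    f.write(byte_content)
with open(out_path, "rb") as f:
    round_trip = f.read()

print(round_trip == byte_content)  # True — the file is byte-identical
```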

As demonstrated, Spark can read any file as binary for storage, and the binary content can later be written back to its original file format as needed.

Check out the iPython Notebook on Github — https://github.com/subhamkharwal/ease-with-apache-spark/blob/master/13_binary_files.ipynb

Check out PySpark Series on Medium — https://subhamkharwal.medium.com/learnbigdata101-spark-series-940160ff4d30