Advent of 2021, Day 8 – Creating RDD files
This article is originally published at https://tomaztsql.wordpress.com
Series of Apache Spark posts:
- Dec 01: What is Apache Spark
- Dec 02: Installing Apache Spark
- Dec 03: Getting around CLI and WEB UI in Apache Spark
- Dec 04: Spark Architecture – Local and cluster mode
- Dec 05: Setting up Spark Cluster
- Dec 06: Setting up IDE
- Dec 07: Starting Spark with R and Python
Spark is built around the concept of resilient distributed datasets (RDDs). An RDD is a fault-tolerant collection of elements that can be operated on in parallel. RDDs can be created in two ways:
- parallelising an existing data collection in the driver program
- referencing a dataset in external storage (HDFS, blob storage, a shared filesystem, any Hadoop InputFormat, …)
Put simply, a Spark RDD supports two types of operations:
- transformations – create a new RDD dataset on top of an existing one
- actions – run a computation on the dataset and return a value to the driver program
Map and reduce are the classic pair: map is a transformation that applies a function to each element of the dataset and returns a new RDD holding the results, while reduce is an action that aggregates all the elements of an RDD using a function and returns the final value to the driver program.
By default, each transformed RDD may be recomputed every time you run an action on it. You can, however, persist an RDD in memory, which makes subsequent access faster because the data is cached.
Using Python, we can start the cluster and begin tinkering with a simple file (using my Advent of Code puzzle input data for day 7):
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .master("local[1]") \
    .appName("UsingAoCData") \
    .getOrCreate()
Next, we parallelise a data collection into an RDD:
data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
rdd = spark.sparkContext.parallelize(data)
These RDDs can be created on many different platforms (HDFS, local filesystem, …) and will still have the same characteristics.
Furthermore, you can also create an empty RDD and populate it later, control how an RDD is partitioned, read in whole text files, and much more.
When you use the parallelize(), textFile() or wholeTextFiles() methods to load data into an RDD, the data is automatically split into partitions (within the limits of the available resources). The number of partitions – as we have already discussed – is based on the number of cores available in the system.
Tomorrow we will look into RDD operations (transformations and actions).
Complete set of code, documents, notebooks, and all of the materials will be available at the Github repository: https://github.com/tomaztk/Spark-for-data-engineers
Happy Spark Advent of 2021!