PySpark : Harnessing the Power of PySpark's foldByKey (aggregate data by keys using a given function)


In this article, we will explore the foldByKey transformation in PySpark. foldByKey is an essential tool when working with Key-Value pair RDDs (Resilient Distributed Datasets), as it allows developers to aggregate data by keys using a given function. We will discuss the syntax and usage, and provide a concrete example with hardcoded values instead of reading from a file.

What is foldByKey?

foldByKey is a transformation operation in PySpark that enables the aggregation of values for each key in a Key-Value pair RDD. This operation takes two arguments: the initial zero value and the function to perform the aggregation. It applies the aggregation function cumulatively to the values of each key, starting with the initial zero value. Because the zero value is applied once per partition that holds a given key, it should be an identity element for the aggregation function (for example, 0 for addition or 1 for multiplication).


The syntax for the foldByKey function is as follows:

foldByKey(zeroValue, func)


  • zeroValue: The initial value used for the aggregation (commonly known as the zero value)
  • func: The function that will be used to aggregate the values for each key


Let’s dive into an example to better understand the usage of foldByKey. Suppose we have a dataset containing sales data for a chain of stores. The data includes store ID, product ID, and the number of units sold. Our goal is to calculate the total units sold for each store.

# Harnessing the Power of PySpark: A Deep Dive into foldByKey
from pyspark import SparkContext

# Initialize the Spark context
sc = SparkContext("local", "foldByKeyExample")

# Sample sales data as (store_id, (product_id, units_sold))
sales_data = [
    (1, (189876, 5)),
    (2, (189876, 7)),
    (1, (267434, 3)),
    (2, (267434, 10)),
    (3, (189876, 4)),
    (3, (267434, 6)),
]

# Create the RDD from the sales_data list
sales_rdd = sc.parallelize(sales_data)

# Keep only the units_sold value for each store, so the aggregation
# works on plain numbers rather than (product_id, units_sold) tuples
units_rdd = sales_rdd.mapValues(lambda value: value[1])

# Define the aggregation function; foldByKey may also use it to merge
# per-partition results, so both arguments must be unit counts
def sum_units(a, b):
    return a + b

# Perform the foldByKey operation with 0 as the zero value
total_sales_rdd = units_rdd.foldByKey(0, sum_units)

# Collect the results and print
for store_id, total_units in total_sales_rdd.collect():
    print(f"Store {store_id} sold a total of {total_units} units.")


Output:

Store 1 sold a total of 8 units.
Store 2 sold a total of 17 units.
Store 3 sold a total of 10 units.

Here we have explored the foldByKey transformation in PySpark. This powerful function allows developers to perform aggregations on Key-Value pair RDDs efficiently. We covered the syntax, usage, and provided an example using hardcoded values. By leveraging foldByKey, you can simplify and optimize your data processing tasks in PySpark, making it an essential tool in your Big Data toolkit.

Important Spark URLs for reference

  1. Spark Examples
  2. PySpark Blogs
  3. Bigdata Blogs
  4. Spark Interview Questions
  5. Official Page