Exploring Data Sampling in PySpark: Techniques and Best Practices

In the realm of big data, PySpark has become an essential tool for data processing and analysis. One of its key features is the ability to perform data sampling, which is crucial for handling large datasets efficiently. This article provides a comprehensive overview of sampling in PySpark, including its significance, methods, and practical applications.

What is Sampling in PySpark?

Sampling in PySpark refers to the process of selecting a subset of data from a larger dataset. It’s a statistical method used to approximate and understand the properties of a large dataset by examining a smaller, manageable part of it.

Importance of Sampling in Big Data Analysis

  • Efficiency: Sampling reduces the volume of data, making it more manageable and quicker to process.
  • Cost-effective: It lowers the computational cost, especially important when working with vast datasets.
  • Feasibility: Sampling makes it possible to analyze large datasets on machines with limited resources.

PySpark’s Sampling Methods

PySpark offers various functions for sampling data, each suited for different requirements and scenarios.

Simple Random Sampling

This method involves randomly selecting data points from the dataset, ensuring each data point has an equal chance of being chosen.

Stratified Sampling

Stratified sampling involves dividing the dataset into smaller groups, or strata, and then sampling from each group. This method ensures representation from all parts of the dataset.

Systematic Sampling

Systematic sampling selects data points at regular intervals. It’s less random but can be more efficient in certain scenarios.

Example: Implementing Sampling in PySpark

Let’s apply PySpark’s sampling methods to a real dataset. Assume we have a dataset of individuals with their names and scores.

Sample Dataset

| Name        | Score |
|-------------|-------|
| Sachin      | 85    |
| Manju       | 90    |
| Ram         | 75    |
| Raju        | 88    |
| David       | 92    |
| Freshers_in | 78    |
| Wilson      | 80    |

Creating a DataFrame

Create a DataFrame with the sample data.

from pyspark.sql import Row, SparkSession

spark = SparkSession.builder.appName('SamplingExample').getOrCreate()

data = [Row(name='Sachin', score=85),
        Row(name='Manju', score=90),
        Row(name='Ram', score=75),
        Row(name='Raju', score=88),
        Row(name='David', score=92),
        Row(name='Freshers_in', score=78),
        Row(name='Wilson', score=80)]
df = spark.createDataFrame(data)

Applying Sampling Techniques

Now, let’s apply different sampling techniques to this DataFrame.

Simple Random Sampling

# Without replacement, keep each row independently with probability 0.5.
# The sample size is only approximate, and the result varies between runs
# unless a seed is supplied.
sampled_df = df.sample(withReplacement=False, fraction=0.5)

Next, to prepare for stratified sampling, we need a new column that categorizes each score into a range. We define a function that assigns a range label based on the score and then apply it to the DataFrame.

from pyspark.sql.functions import udf
from pyspark.sql.types import StringType
# Define a UDF to categorize scores into ranges
def score_range(score):
    if 75 <= score <= 85:
        return '75-85'
    elif 86 <= score <= 95:
        return '86-95'
    else:
        return 'Other'
# Register UDF
score_range_udf = udf(score_range, StringType())
# Add a new column with score ranges
df_with_range = df.withColumn("ScoreRange", score_range_udf("score"))

Stratified Sampling

Assuming we want to stratify by score range:

# Define fractions for each stratum
fractions = {'75-85': 0.5, '86-95': 0.5}
# Perform stratified sampling
stratified_sample = df_with_range.stat.sampleBy("ScoreRange", fractions, seed=0)

These commands will produce subsets of the original dataset. Analyzing these samples can give insights comparable to those from the full dataset but with reduced computational effort.


Simple random sample (contents vary between runs):

|  name|score|
|Sachin|   85|
|  Raju|   88|
|Wilson|   80|

DataFrame with the added ScoreRange column:

|       name|score|ScoreRange|
|     Sachin|   85|     75-85|
|      Manju|   90|     86-95|
|        Ram|   75|     75-85|
|       Raju|   88|     86-95|
|      David|   92|     86-95|
|Freshers_in|   78|     75-85|
|     Wilson|   80|     75-85|

Stratified sample drawn with seed=0:

|       name|score|ScoreRange|
|        Ram|   75|     75-85|
|Freshers_in|   78|     75-85|
|     Wilson|   80|     75-85|
