Detect existing (non-missing) values in Spark DataFrames using the Pandas API : notna()


Apache Spark offers robust capabilities for large-scale data processing, yet efficiently identifying which values are present (non-missing) in a DataFrame can still be challenging. With the Pandas API on Spark, however, users can leverage familiar functions such as notna() to detect existing values seamlessly. This article shows how to use notna() through the Pandas API on Spark to identify existing values in Spark DataFrames, accompanied by a complete example and its output.
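As a quick illustration of the idea, here is a minimal sketch (assuming pyspark.pandas is available, i.e. Spark 3.2 or later): calling notna() on a pandas-on-Spark Series returns a boolean Series with one flag per element.

# Minimal sketch: notna() on a pandas-on-Spark Series
import pyspark.pandas as ps

s = ps.Series([10, None, 20])
print(s.notna())   # True, False, True - one flag per element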

Understanding Existing Value Detection

Identifying existing (non-missing) values is essential for accurate analysis and decision-making. It ensures that the data being analyzed is complete and representative of the underlying phenomena.

Example: Detecting Existing Values with notna()

Consider an example where we have a Spark DataFrame containing sales data in which some rows have missing values in the ‘product’, ‘quantity’, or ‘price’ columns.

# Import necessary libraries
from pyspark.sql import SparkSession

# Create SparkSession
spark = SparkSession.builder \
    .appName("Identifying existing values : Learning @ Freshers.in") \
    .getOrCreate()

# Sample data with missing values
data = [("apple", 10, 1.0),
        ("banana", None, 2.0),
        ("orange", 20, None),
        (None, 30, 3.0)]

columns = ["product", "quantity", "price"]
df = spark.createDataFrame(data, columns)

# Convert the Spark DataFrame to a pandas-on-Spark DataFrame (Pandas API on Spark)
psdf = df.pandas_api()

# Detect existing (non-missing) values using notna()
existing_values = psdf.notna()

# Display DataFrame with existing value indicators
print(existing_values)

Output:

   product  quantity  price
0     True      True   True
1     True     False   True
2     True      True  False
3    False      True   True

In this example, the notna() function detected the existing values in the DataFrame, marking every present value as ‘True’ and every missing value as ‘False’ in the corresponding cells.
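In practice the boolean mask is usually an intermediate step. The short follow-up below is a sketch that reuses the psdf variable from the example above and shows two common next steps: counting the non-missing values per column and keeping only the fully populated rows.

# Count non-missing values in each column
print(psdf.count())

# Keep only the rows where every column has a value
complete_rows = psdf.dropna()
print(complete_rows)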
