As the number of fields grows in every industry and every data source, it is almost impossible to store all the variables in a single data table, so data usually arrives split across multiple tables or files. In these situations, whenever variables need to be brought together into one table, a merge or join is used. A right join is one type of join where, for all the rows present in Table 2, the corresponding variables are pulled from Table 1. The article below discusses how to right join DataFrames in PySpark.
Amy has two DataFrames. Customer Data 1 has 10 observations, with Customer ID, First Name, Last Name and Gender; Customer ID is the primary key. Customer Data 2 has 12 observations, with Customer ID as the primary key plus First Name, Last Name, Country Name and Total Spend in a year. Amy wants the Gender information for all the customers in Customer Data 2, i.e. she wants to create another DataFrame where, for each record of Customer Data 2, she also gets the Gender from Customer Data 1.
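Before turning to Spark, the logic Amy wants can be sketched in plain Python (the IDs and values below are invented for illustration): every record of table 2 is kept, and Gender is pulled from table 1 where a match exists, with a null where it does not.

```python
# Toy stand-ins for the two tables (invented sample data).
customer_data_1 = {  # Customer ID -> Gender; other fields omitted
    101: "F",
    102: "M",
    103: "F",
}
customer_data_2 = [  # every record here must survive the join
    {"ID": 101, "Country": "US", "Spend": 1200},
    {"ID": 103, "Country": "UK", "Spend": 800},
    {"ID": 105, "Country": "IN", "Spend": 950},  # no match in table 1
]

# Right-join semantics: keep all rows of table 2 and pull Gender from
# table 1, filling None when the customer is missing from table 1.
merged = [
    {**row, "Gender": customer_data_1.get(row["ID"])}
    for row in customer_data_2
]
```

Note that the output has exactly as many rows as table 2, and unmatched customers (ID 105 here) carry a null Gender.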
Below are the key steps to right join PySpark DataFrames:
- Step 1: Import all the necessary modules.
import pandas as pd
import findspark
findspark.init()
import pyspark
from pyspark import SparkContext
from pyspark.sql import SQLContext
sc = SparkContext("local", "App Name")
sql = SQLContext(sc)
- Step 2: Use the join function from the PySpark module to merge the DataFrames. To perform a right join, pass “right_outer” as the join type. The column used as the key for joining the two DataFrames is defined by equating the key columns: “Table 1 key” == “Table 2 key”.
Merged_Data = Customer_Data_1.join(Customer_Data_2, \
    Customer_Data_1.ID == Customer_Data_2.ID, "right_outer")
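For readers more familiar with pandas (imported in Step 1), the equivalent right join can be sketched with `DataFrame.merge` and `how="right"`; the toy tables below are invented for illustration:

```python
import pandas as pd

# Invented sample tables mirroring the scenario.
customer_data_1 = pd.DataFrame({"ID": [1, 2, 3], "Gender": ["F", "M", "F"]})
customer_data_2 = pd.DataFrame({"ID": [1, 3, 4], "Spend": [100, 200, 300]})

# how="right" keeps every row of customer_data_2, pulling Gender from
# customer_data_1 (NaN where the ID has no match in customer_data_1).
merged = customer_data_1.merge(customer_data_2, on="ID", how="right")
```

One difference worth noting: pandas merges on a shared column name, so the key appears once in the output, whereas the PySpark equality-condition join above keeps an ID column from each DataFrame.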
- Step 3: Check the output data quality to assess the observations in the final DataFrame. Please note that since Customer Data 2 has 12 observations, the final DataFrame also has 12 observations. Use the show() command to display the top rows of a PySpark DataFrame.
Merged_Data.show()
# Print the shape of the DataFrame, i.e. number of rows and number of columns
print((Merged_Data.count(), len(Merged_Data.columns)))
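The row-count expectation in Step 3 can be checked without Spark: when the kept table's key is unique, a right join produces exactly one output row per row of the kept table. A minimal sketch, assuming the 10- and 12-row key sets from the scenario:

```python
# Invented key sets for illustration: 10 IDs in table 1, 12 in table 2.
table_1_ids = set(range(1, 11))   # Customer Data 1: 10 observations
table_2_ids = set(range(1, 13))   # Customer Data 2: 12 observations, unique key

# In a right join that keeps all of table 2, each table-2 key yields
# exactly one output row (its table-1 fields may be null), so:
expected_rows = len(table_2_ids)
unmatched = table_2_ids - table_1_ids  # rows that will carry nulls
```

If Merged_Data.count() differs from this expected value, the join type or key condition is worth re-checking.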