PySpark select() and collect() functions

What does the DataFrame.fillna() function do in PySpark?

Replace NULL/None values with specified constant literal values

What is the purpose of the PySpark pivot() function?

Rotate/transpose data from one column into multiple DataFrame columns

How is the partitionBy() function in PySpark utilized?

Partition a large dataset into smaller files based on one or multiple columns

What data type does MapType in PySpark represent?

Python Dictionary (dict)

In PySpark, what is the main purpose of DataFrameNaFunctions.fill()?

To replace NULL/None values on DataFrame columns with constant values

What action does the foreach() function perform in PySpark?

Execute the input function on each element of an RDD

What does the PySpark select() function do?

Selects specific columns from a DataFrame

What is the purpose of the PySpark collect() operation?

Retrieves all elements of the dataset to the driver node

What happens when retrieving larger datasets with PySpark collect()?

An OutOfMemory error can occur on the driver node

What is the purpose of the PySpark withColumn() function?

Changes the value or converts the datatype of an existing column

How can you rename a DataFrame column in PySpark?

Using the withColumnRenamed() function

What does the PySpark filter() function do?

Filters the rows from a DataFrame based on a given condition

Which PySpark transformation function is used to remove duplicate rows from a DataFrame based on selected columns?

dropDuplicates()

Which PySpark function is used to sort a DataFrame in ascending or descending order based on single or multiple columns?

orderBy()

What is the purpose of PySpark groupBy() function?

To perform computations on each group of data.

Which PySpark transformation is used to combine two DataFrames based on a common key similar to SQL JOIN?

join()

Which PySpark transformation is used to merge two DataFrames with different schemas based on column names?

unionByName()

What is a UDF in PySpark?

User Defined Function

Which PySpark function is used to chain custom transformations on a DataFrame?

transform()

Which PySpark function is used to apply a transformation function on every element of a DataFrame and return a new RDD?

map()

Which PySpark transformation operation is used to flatten the DataFrame after applying a function on every element?

flatMap()

What is the purpose of the PySpark foreach() operation?

To iterate over each element in the DataFrame.

In PySpark, the MapType data type is used to represent a Python tuple.

False

The PySpark partitionBy() function can partition a large dataset into smaller files based on multiple columns.

True

The PySpark pivot() function transposes data from multiple columns into a single column.

False

PySpark's foreach() function returns a new RDD after applying a transformation function on each element of the input RDD.

False

The PySpark fillna() function can replace NULL/None values with a custom constant literal value.

True

PySpark MapType comprises four fields: keyType, valueType, valueContainsNull, and keyContainsNull.

False

The PySpark withColumn() function can be used to rename columns in a DataFrame.

False

The PySpark select() function can only be used to select a single column from a DataFrame.

False

Calling the collect() function in PySpark can result in an OutOfMemoryError on the driver for large datasets.

True

The PySpark filter() function and where() clause operate differently based on the given condition.

False

By default, the PySpark filter() function returns a new DataFrame with all the rows that meet the specified condition.

True

The PySpark withColumnRenamed() function can only rename one DataFrame column at a time.

False

In PySpark, the distinct() transformation function is used to sort a DataFrame by ascending or descending order based on single or multiple columns.

False

PySpark Joins support all basic join types available in traditional SQL, such as INNER, LEFT OUTER, RIGHT OUTER, LEFT ANTI, LEFT SEMI, CROSS, and SELF JOIN.

True

In PySpark, the DataFrameNaFunctions.fill() function replaces null values in DataFrame columns with specified scalar values.

True

PySpark map() is an action operation that returns a new RDD by applying a transformation function on every element of the RDD.

False

PySpark's distinct() function returns the first occurrence of a duplicate row, thus preserving the original order of the DataFrame.

False

PySpark's unionByName() transformation can be used to merge two DataFrames with a different number of columns, given that the allowMissingColumns parameter is set to True.

True

PySpark's flatMap() transformation operation performs a function and returns a new RDD/DataFrame without flattening the array or map-type DataFrame columns.

False

PySpark's groupBy() function is used to perform count, sum, average, minimum, and maximum functions on the grouped data.

True

PySpark's transform() function is an action operation that chains custom transformations and returns a new DataFrame.

False

PySpark's UDF feature is used to extend the built-in capabilities of Spark SQL & DataFrame and allows users to create their own custom functions for specific use-cases.

True

Study Notes

PySpark Functions

foreach()

  • An action operation that iterates over each element in a DataFrame or RDD
  • Executes a function on each element without returning a value
  • Comparable to a for loop, but executed in parallel on the executors
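
A minimal sketch; the sample rows are invented, and the lambda runs on the executors without returning anything to the driver:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("Alice", 34), ("Bob", 29)], ["name", "age"])

# Runs on the executors for each Row; nothing is returned to the driver.
df.foreach(lambda row: print(row.name, row.age))
```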

Data Manipulation

fillna()

  • Replaces NULL/None values in DataFrame columns with specified values (e.g., zero, empty string, space)
  • Can be used with multiple columns
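
A short sketch with a toy DataFrame containing NULLs, showing both the blanket and the per-column form:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("Alice", None), (None, 29)], ["name", "age"])

df.fillna(0).show()                              # numeric NULLs -> 0
df.fillna({"name": "unknown", "age": 0}).show()  # per-column replacements
```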

pivot()

  • Rotates row values from one column into multiple DataFrame columns; the reverse (unpivot) is typically expressed with stack(), or with DataFrame.unpivot() in Spark 3.4+
  • An aggregation operation used with groupBy() that transposes the distinct values of one column into separate columns
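
A sketch with invented sales data; pivot() is called on the grouped data:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [("2023", "Java", 2), ("2023", "Python", 5), ("2024", "Python", 3)],
    ["year", "language", "sales"],
)

# Distinct `language` values become new columns, aggregated per year.
df.groupBy("year").pivot("language").sum("sales").show()
```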

partitionBy()

  • Divides a large dataset (DataFrame) into smaller files based on one or multiple columns
  • Used when writing to disk
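
A sketch using toy data and an illustrative output path (/tmp/sales_by_year):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [("2023", "Java", 2), ("2024", "Python", 3)], ["year", "language", "sales"]
)

# Writes one sub-directory per distinct `year` value under the target path.
df.write.partitionBy("year").mode("overwrite").parquet("/tmp/sales_by_year")
```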

MapType

  • A data type to represent Python dictionaries (dict) and store key-value pairs
  • Comprises three fields: keyType (DataType), valueType (DataType), and valueContainsNull (BooleanType)
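
A small example; the `properties` field name and its contents are invented:

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import MapType, StringType, StructField, StructType

spark = SparkSession.builder.getOrCreate()
schema = StructType([
    StructField("name", StringType()),
    StructField("properties", MapType(StringType(), StringType(), valueContainsNull=True)),
])
df = spark.createDataFrame([("Alice", {"hair": "black", "eye": "brown"})], schema)
df.printSchema()  # properties: map<string,string> (valueContainsNull = true)
```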

select()

  • Selects single, multiple, or all columns from a DataFrame
  • Returns a new DataFrame with selected columns
  • Can be used with column indices or nested columns
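
A quick sketch over a made-up two-column DataFrame:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("Alice", 34), ("Bob", 29)], ["name", "age"])

df.select("name").show()           # single column
df.select(df.name, df.age).show()  # multiple columns
df.select(df.columns[:1]).show()   # by index into the column list
```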

collect()

  • An action operation that retrieves all elements of a dataset (from all nodes) to the driver node
  • Should be used with smaller datasets after filtering or grouping to avoid OutOfMemory errors
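
A sketch of the recommended pattern of filtering before collecting; the data is illustrative:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("Alice", 34), ("Bob", 29)], ["name", "age"])

# Filter first so only a small result set is pulled onto the driver.
rows = df.filter(df.age > 30).collect()  # a Python list of Row objects
for row in rows:
    print(row["name"], row["age"])
```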

withColumn()

  • Changes values, converts data types, creates new columns, and more
  • Examples include creating new columns, casting data types, and applying functions; renaming a column is done with withColumnRenamed() instead
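
A sketch contrasting the common uses, with withColumnRenamed() handling the rename case:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("Alice", 34)], ["name", "age"])

df = df.withColumn("age", col("age").cast("double"))  # convert the datatype
df = df.withColumn("age_plus_one", col("age") + 1)    # derive a new column
df = df.withColumnRenamed("name", "full_name")        # rename a column
df.show()
```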

Filtering and Sorting

filter()

  • Filters rows from RDD/DataFrame based on a condition or SQL expression
  • Returns a new DataFrame or RDD with only the rows that meet the condition
  • Can be used with the where() clause
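
A sketch of the equivalent spellings; the sample rows are invented:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("Alice", 34), ("Bob", 29)], ["name", "age"])

df.filter(col("age") > 30).show()  # column expression
df.filter("age > 30").show()       # SQL expression string
df.where(col("age") > 30).show()   # where() behaves identically
```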

distinct() and dropDuplicates()

  • distinct() removes duplicate rows considering all columns, while dropDuplicates() drops duplicates based on selected columns
  • Return a new DataFrame
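
A toy example contrasting the two:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [("Alice", 34), ("Alice", 34), ("Alice", 40)], ["name", "age"]
)

df.distinct().show()                # fully identical rows removed -> 2 rows
df.dropDuplicates(["name"]).show()  # deduped on `name` alone -> 1 row
```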

sort() and orderBy()

  • Sort a DataFrame in ascending or descending order based on single or multiple columns
  • Can also be done using PySpark SQL sorting functions
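
A brief sketch, including the SQL sorting functions asc() and desc():

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import asc, desc

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("Alice", 34), ("Bob", 29)], ["name", "age"])

df.sort("age").show()                        # ascending by default
df.orderBy(df.age.desc()).show()             # descending via the Column API
df.orderBy(asc("name"), desc("age")).show()  # SQL sorting functions
```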

Grouping and Joining

groupBy()

  • Collects identical data into groups on a DataFrame and performs count, sum, avg, min, and max functions on the grouped data
  • Similar to SQL GROUP BY clause
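
A sketch over invented dept/salary rows:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [("sales", 3000), ("sales", 4600), ("finance", 3900)], ["dept", "salary"]
)

df.groupBy("dept").count().show()
df.groupBy("dept").agg(
    F.sum("salary"), F.avg("salary"), F.min("salary"), F.max("salary")
).show()
```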

join()

  • Combines two DataFrames and supports various join types (e.g., INNER, LEFT OUTER, RIGHT OUTER)
  • Involves data shuffling across the network
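
A sketch with two made-up DataFrames; the second call keeps unmatched rows via a LEFT join:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
emp = spark.createDataFrame([("Alice", 10), ("Bob", 20)], ["name", "dept_id"])
dept = spark.createDataFrame([(10, "sales")], ["dept_id", "dept_name"])

emp.join(dept, on="dept_id", how="inner").show()  # Bob's dept 20 has no match
emp.join(dept, on="dept_id", how="left").show()   # Bob kept, NULL dept_name
```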

union() and unionAll()

  • Merge two DataFrames with the same schema or structure (calls can be chained to merge more); unionAll() is a deprecated alias of union()
  • unionByName() instead matches columns by name and takes an allowMissingColumns parameter (Spark 3.1+)
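
A sketch; note that allowMissingColumns requires Spark 3.1 or later:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df1 = spark.createDataFrame([("Alice", 34)], ["name", "age"])
df2 = spark.createDataFrame([("Bob", 29)], ["name", "age"])

df1.union(df2).show()  # same schema, columns matched by position

# Columns matched by name; the missing `age` column is filled with NULLs.
df3 = spark.createDataFrame([("Carol",)], ["name"])
df1.unionByName(df3, allowMissingColumns=True).show()
```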

User-Defined Functions (UDF)

  • Extend PySpark's built-in capabilities
  • Can be created and used with DataFrame select(), withColumn(), and SQL
  • Allow custom functions to be applied to columns
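
A minimal sketch; to_upper is a hypothetical UDF that passes NULLs through:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("alice",), (None,)], ["name"])

# Hypothetical UDF that upper-cases a string and passes NULLs through.
to_upper = udf(lambda s: s.upper() if s is not None else None, StringType())
df.withColumn("name_upper", to_upper(df.name)).show()
```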

Transformations

transform()

  • Chains custom transformations and returns a new DataFrame
  • Takes a function that accepts and returns a DataFrame, keeping chains of custom steps readable
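
A sketch with two hypothetical helper functions; each accepts and returns a DataFrame, so the calls chain:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, upper

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("Alice", 34)], ["name", "age"])

# Hypothetical helpers: each accepts a DataFrame and returns a DataFrame.
def with_doubled_age(input_df):
    return input_df.withColumn("double_age", col("age") * 2)

def with_upper_name(input_df):
    return input_df.withColumn("name", upper(col("name")))

df.transform(with_doubled_age).transform(with_upper_name).show()
```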

map()

  • Applies a transformation function (often a lambda) to every element of an RDD; a DataFrame must first be converted via df.rdd
  • Returns a new RDD
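
A sketch; note the df.rdd hop, since map() is an RDD operation:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("Alice", 34), ("Bob", 29)], ["name", "age"])

# map() lives on RDDs, so hop through df.rdd; the result is a new RDD.
rdd2 = df.rdd.map(lambda row: (row.name, row.age + 1))
print(rdd2.collect())  # [('Alice', 35), ('Bob', 30)]
```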

flatMap()

  • Applies a function to every element and flattens the nested results (e.g., lists) into a single sequence of elements
  • Returns a new RDD; for array/map DataFrame columns, explode() is the DataFrame-side equivalent
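
A classic word-splitting sketch on a toy RDD:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
rdd = spark.sparkContext.parallelize(["hello world", "hi"])

# Each element yields zero or more outputs, flattened into a single RDD.
print(rdd.flatMap(lambda line: line.split(" ")).collect())
# ['hello', 'world', 'hi']
```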

sample()

  • Retrieves a random sampling subset from a large dataset
  • Offers multiple methods (e.g., DataFrame.sample(), RDD.sample(), RDD.takeSample())
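
A sketch using spark.range() as stand-in data; fixed seeds make the sampling reproducible:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.range(100)  # stand-in DataFrame with ids 0..99

df.sample(fraction=0.1, seed=42).show()  # roughly 10% of rows, reproducible
print(df.rdd.takeSample(False, 5, 42))   # exactly 5 random elements (an action)
```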

Learn about PySpark's select() function, used for selecting single, multiple, or nested columns from a DataFrame, and how collect() retrieves all elements of the dataset to the driver node. Use collect() only on smaller datasets, typically after narrowing operations such as filter() or groupBy().
