In this tutorial, you will learn how to transpose (pivot) rows to columns in a DataFrame using Scala in Databricks.
Data integrity refers to the quality, consistency, and reliability of data throughout its life cycle. Data engineering pipelines are the processes and structures that collect, transform, store, and analyse data from many sources.
Scala is a programming language that combines the object-oriented and functional programming paradigms. It was created by Martin Odersky and first released in 2003. "Scala" is short for "scalable language," reflecting the language's capacity to grow from simple scripts to complex systems.
Scala is designed to be productive, expressive, and concise, and it can be used for a wide range of tasks, from scripting to large-scale enterprise applications. It has become increasingly popular in sectors such as banking, where its strong type system and expressive syntax are particularly valuable.
// Import libraries
import org.apache.spark.sql.{SparkSession, Row}
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

// Create a Spark session (in a Databricks notebook, `spark` already exists)
val spark = SparkSession.builder().appName("TransposeRows").getOrCreate()

// Path to the source file
val filePath = "dbfs:/FileStore/tables/StoreSales.csv"

// Read the data from the file into a DataFrame
val df = spark.read.option("header", "true").csv(filePath)

// Show the DataFrame schema
df.printSchema()
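Note that with only `header=true`, every CSV column is read as `StringType`. If the sales figures should be numeric, asking Spark to infer the schema (or supplying an explicit one) gives the columns real types. A minimal self-contained sketch, using a tiny temp CSV in place of the DBFS file (which is only reachable inside Databricks):

```scala
import org.apache.spark.sql.SparkSession
import java.nio.file.Files

val spark = SparkSession.builder().appName("InferSchemaSketch").master("local[*]").getOrCreate()

// Write a tiny CSV to a temp file so this sketch runs anywhere
// (the column layout here is an assumption about StoreSales.csv)
val tmp = Files.createTempFile("StoreSales", ".csv")
Files.write(tmp, "Store,Month,Sales\nStore1,Jan,100\nStore2,Jan,200\n".getBytes)

// header=true alone leaves every column as StringType;
// inferSchema=true makes Spark scan the data and assign real types
val typedDF = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv(tmp.toString)

typedDF.printSchema()
```

Schema inference costs an extra pass over the data; for large files, an explicit `StructType` schema is usually the better choice.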
Create a new DataFrame by applying the transpose/pivot logic as given below.
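One common way to turn row values into columns in Spark is `groupBy(...).pivot(...).agg(...)`. A minimal sketch, using hypothetical Store/Month/Sales columns as a stand-in for the StoreSales data (the actual column names in StoreSales.csv are an assumption):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Local session for this sketch; in a Databricks notebook, `spark` already exists
val spark = SparkSession.builder().appName("PivotSketch").master("local[*]").getOrCreate()
import spark.implicits._

// Hypothetical sample data standing in for StoreSales.csv
val salesDF = Seq(
  ("Store1", "Jan", 100),
  ("Store1", "Feb", 150),
  ("Store2", "Jan", 200),
  ("Store2", "Feb", 250)
).toDF("Store", "Month", "Sales")

// Pivot rows to columns: each distinct Month value becomes its own column,
// with one output row per Store
val pivotDF = salesDF
  .groupBy("Store")
  .pivot("Month", Seq("Jan", "Feb")) // explicit value list avoids an extra scan
  .agg(first("Sales"))

pivotDF.show()
```

Passing the explicit value list `Seq("Jan", "Feb")` to `pivot` both fixes the column order and spares Spark a pass over the data to discover the distinct values.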
Please watch our demo video on YouTube.
To learn more, please follow us:
- Website: http://www.sql-datatools.com
- YouTube: http://www.youtube.com/c/Sql-datatools
- Instagram: https://www.instagram.com/asp.mukesh/
- Twitter: https://twitter.com/macxima