Comma-Separated Values - CSV
Comma-Separated Values (CSV) is a widely supported interchange format for tabular text data. It is understood by most spreadsheet applications and is often used as a database extraction format.
Despite the name, the values are often separated by a semicolon (`;`). Even though the format is interpreted differently by different applications, a formal specification exists in RFC 4180.
The format uses three different characters to structure the data:

- Field Delimiter - separates the columns from each other (e.g. `,` or `;`)
- Quote - marks columns that may contain other structuring characters (such as Field Delimiters or line breaks) (e.g. `"`)
- Escape Character - used to escape Field Delimiters within columns (e.g. `\`)

Lines are separated by either Line Feed (`\n` = ASCII 10) or Carriage Return and Line Feed (`\r` = ASCII 13 + `\n` = ASCII 10).
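To make the role of these three characters concrete, here is a toy single-line splitter (a hypothetical `CsvLineSketch` helper, not the Alpakka parser) that applies the delimiter, quote and escape rules described above:

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of the three structuring characters. This is a sketch
// of the rules above, not the Alpakka parser.
public class CsvLineSketch {

    public static List<String> split(String line, char delimiter, char quote, char escape) {
        List<String> fields = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean inQuotes = false;
        for (int i = 0; i < line.length(); i++) {
            char c = line.charAt(i);
            if (c == escape && i + 1 < line.length()) {
                current.append(line.charAt(++i));   // escaped character taken literally
            } else if (c == quote) {
                inQuotes = !inQuotes;               // quote toggles a protected section
            } else if (c == delimiter && !inQuotes) {
                fields.add(current.toString());     // unquoted delimiter ends the column
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        fields.add(current.toString());
        return fields;
    }

    public static void main(String[] args) {
        // a quoted field may contain the delimiter
        System.out.println(split("a,\"b,c\",d", ',', '"', '\\'));
        // prints [a, b,c, d]
    }
}
```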
Artifacts

- sbt

```scala
libraryDependencies += "com.lightbend.akka" %% "akka-stream-alpakka-csv" % "0.9"
```

- Maven

```xml
<dependency>
  <groupId>com.lightbend.akka</groupId>
  <artifactId>akka-stream-alpakka-csv_2.12</artifactId>
  <version>0.9</version>
</dependency>
```

- Gradle

```gradle
dependencies {
  compile group: "com.lightbend.akka", name: "akka-stream-alpakka-csv_2.12", version: "0.9"
}
```
CSV parsing
CSV parsing offers a flow that takes a stream of `akka.util.ByteString` and issues a stream of lists of `ByteString`. The incoming data must contain line ends to allow line-based framing. The CSV special characters can be specified (as bytes); suitable values are available as constants in `CsvParsing`.
The current parser is limited to byte-based character sets (UTF-8, ISO-8859-1, ASCII) and can’t parse double-byte encodings (e.g. UTF-16).
The parser accepts Byte Order Mark (BOM) for UTF-8, but will fail for UTF-16 and UTF-32 Byte Order Marks.
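The UTF-8 BOM is the byte sequence `0xEF 0xBB 0xBF`, while UTF-16 BOMs begin with `0xFE 0xFF` or `0xFF 0xFE`. The behaviour described above can be sketched as follows (a hypothetical helper, not Alpakka's implementation):

```java
import java.util.Arrays;

// Sketch of the BOM handling described above, not Alpakka's code:
// the UTF-8 BOM is skipped, UTF-16 BOMs are rejected.
public class BomSketch {

    public static byte[] stripUtf8Bom(byte[] data) {
        if (data.length >= 3
                && (data[0] & 0xFF) == 0xEF
                && (data[1] & 0xFF) == 0xBB
                && (data[2] & 0xFF) == 0xBF) {
            return Arrays.copyOfRange(data, 3, data.length);  // drop the UTF-8 BOM
        }
        if (data.length >= 2
                && ((data[0] & 0xFF) == 0xFE && (data[1] & 0xFF) == 0xFF
                    || (data[0] & 0xFF) == 0xFF && (data[1] & 0xFF) == 0xFE)) {
            throw new IllegalArgumentException("UTF-16 byte order marks are not supported");
        }
        return data;
    }

    public static void main(String[] args) {
        byte[] withBom = {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF, 'a', ',', 'b'};
        System.out.println(new String(stripUtf8Bom(withBom)));  // prints a,b
    }
}
```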
- Scala

```scala
import akka.stream.alpakka.csv.scaladsl.CsvParsing

val flow: Flow[ByteString, List[ByteString], NotUsed] =
  CsvParsing.lineScanner(delimiter, quoteChar, escapeChar)
```

- Java

```java
Flow<ByteString, Collection<ByteString>, NotUsed> flow =
    CsvParsing.lineScanner(delimiter, quoteChar, escapeChar);
```
In this sample we read a single line of CSV-formatted data into a list of column elements:
- Scala

```scala
import akka.stream.alpakka.csv.scaladsl.CsvParsing

Source.single(ByteString("eins,zwei,drei\n"))
  .via(CsvParsing.lineScanner())
  .runWith(Sink.head)
```

- Java

```java
Source.single(ByteString.fromString("eins,zwei,drei\n"))
    .via(CsvParsing.lineScanner())
    .runWith(Sink.head(), materializer);
```
CSV conversion into a map
The column-based nature of CSV files can be used to read them into a map of column names and their `ByteString` values. The column names can either be provided in code, or the first line of data can be interpreted as the column names.
- Scala

```scala
import akka.stream.alpakka.csv.scaladsl.CsvToMap

val flow1: Flow[List[ByteString], Map[String, ByteString], NotUsed] =
  CsvToMap.toMap()

val flow2: Flow[List[ByteString], Map[String, ByteString], NotUsed] =
  CsvToMap.toMap(StandardCharsets.UTF_8)

val flow3: Flow[List[ByteString], Map[String, ByteString], NotUsed] =
  CsvToMap.withHeaders("column1", "column2", "column3")
```

- Java

```java
Flow<Collection<ByteString>, Map<String, ByteString>, ?> flow1 =
    CsvToMap.toMap();

Flow<Collection<ByteString>, Map<String, ByteString>, ?> flow2 =
    CsvToMap.toMap(StandardCharsets.UTF_8);

Flow<Collection<ByteString>, Map<String, ByteString>, ?> flow3 =
    CsvToMap.withHeaders("column1", "column2", "column3");
```
This example uses the first line in the CSV data as column names:
- Scala

```scala
import akka.stream.alpakka.csv.scaladsl.{CsvParsing, CsvToMap}

Source
  .single(ByteString("""eins,zwei,drei
                       |1,2,3""".stripMargin))
  .via(CsvParsing.lineScanner())
  .via(CsvToMap.toMap())
  .runWith(Sink.head)
```

- Java

```java
Source
    .single(ByteString.fromString("eins,zwei,drei\n1,2,3"))
    .via(CsvParsing.lineScanner())
    .via(CsvToMap.toMap(StandardCharsets.UTF_8))
    .runWith(Sink.head(), materializer);
```
This sample will generate the same output as above, but the column names are specified in the code:
- Scala

```scala
import akka.stream.alpakka.csv.scaladsl.{CsvParsing, CsvToMap}

Source
  .single(ByteString("""1,2,3"""))
  .via(CsvParsing.lineScanner())
  .via(CsvToMap.withHeaders("eins", "zwei", "drei"))
  .runWith(Sink.head)
```

- Java

```java
Source
    .single(ByteString.fromString("1,2,3"))
    .via(CsvParsing.lineScanner())
    .via(CsvToMap.withHeaders("eins", "zwei", "drei"))
    .runWith(Sink.head(), materializer);
```
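The core idea behind the map conversion can be illustrated without Akka Streams: pair each header with the column value at the same index. This is a hypothetical sketch, not the Alpakka implementation:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the idea behind CsvToMap: zip the header row with each data
// row, position by position. Hypothetical helper, not part of the API.
public class ToMapSketch {

    public static Map<String, String> zipToMap(List<String> headers, List<String> columns) {
        Map<String, String> row = new LinkedHashMap<>();
        for (int i = 0; i < headers.size() && i < columns.size(); i++) {
            row.put(headers.get(i), columns.get(i));
        }
        return row;
    }

    public static void main(String[] args) {
        System.out.println(zipToMap(List.of("eins", "zwei", "drei"), List.of("1", "2", "3")));
        // prints {eins=1, zwei=2, drei=3}
    }
}
```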
CSV formatting
To emit CSV files, `immutable.Seq[String]` can be formatted into `ByteString`, e.g. to be written to a file. The formatter takes care of quoting and escaping.
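The "quote only if required" behaviour can be sketched as follows. Note this toy version doubles embedded quotes in RFC 4180 style, whereas the Alpakka formatter is configured with a dedicated escape character; it is an illustration, not the library's code:

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the "quote only if required" rule: a field is wrapped in quotes
// only when it contains the delimiter, the quote character or a line break.
// Embedded quotes are doubled (RFC 4180 style). Not the Alpakka formatter.
public class FormatSketch {

    public static String formatLine(List<String> fields, char delimiter, char quote) {
        return fields.stream()
                .map(f -> quoteIfRequired(f, delimiter, quote))
                .collect(Collectors.joining(String.valueOf(delimiter))) + "\r\n";
    }

    private static String quoteIfRequired(String field, char delimiter, char quote) {
        boolean needsQuoting = field.indexOf(delimiter) >= 0
                || field.indexOf(quote) >= 0
                || field.indexOf('\n') >= 0
                || field.indexOf('\r') >= 0;
        if (!needsQuoting) {
            return field;
        }
        String doubled = field.replace(String.valueOf(quote), "" + quote + quote);
        return quote + doubled + quote;
    }

    public static void main(String[] args) {
        System.out.print(formatLine(List.of("eins", "zw,ei", "drei"), ',', '"'));
        // prints: eins,"zw,ei",drei  (followed by \r\n)
    }
}
```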
Certain CSV readers (e.g. Microsoft Excel) require CSV files to indicate their character encoding with a Byte Order Mark (BOM) in the first bytes of the file. Choose an appropriate Byte Order Mark matching the selected character set from the constants in `ByteOrderMark` (see the Unicode FAQ on Byte Order Mark).
- Scala

```scala
val flow: Flow[immutable.Seq[String], ByteString, _] =
  CsvFormatting.format(delimiter,
                       quoteChar,
                       escapeChar,
                       endOfLine,
                       CsvQuotingStyle.Required,
                       charset = StandardCharsets.UTF_8,
                       byteOrderMark = None)
```

- Java

```java
Flow<Collection<String>, ByteString, ?> flow1 =
    CsvFormatting.format();

Flow<Collection<String>, ByteString, ?> flow2 =
    CsvFormatting.format(delimiter,
                         quoteChar,
                         escapeChar,
                         endOfLine,
                         CsvQuotingStyle.REQUIRED,
                         charset,
                         byteOrderMark);
```
This example uses the default configuration:

- Delimiter: comma (`,`)
- Quote char: double quote (`"`)
- Escape char: backslash (`\`)
- Line ending: Carriage Return and Line Feed (`\r` = ASCII 13 + `\n` = ASCII 10)
- Quoting style: quote only if required
- Charset: UTF-8
- No Byte Order Mark
- Scala

```scala
import akka.stream.alpakka.csv.scaladsl.CsvFormatting

Source
  .single(List("eins", "zwei", "drei"))
  .via(CsvFormatting.format())
  .runWith(Sink.head)
```

- Java

```java
Source.single(Arrays.asList("one", "two", "three", "four"))
    .via(CsvFormatting.format())
    .runWith(Sink.head(), materializer);
```