Involved Source Files
reader.go
Package csv reads and writes comma-separated values (CSV) files.
There are many kinds of CSV files; this package supports the format
described in RFC 4180.
A csv file contains zero or more records of one or more fields per record.
Each record is separated by the newline character. The final record may
optionally be followed by a newline character.
field1,field2,field3
White space is considered part of a field.
Carriage returns before newline characters are silently removed.
Blank lines are ignored. A line with only whitespace characters (excluding
the ending newline character) is not considered a blank line.
Fields which start and stop with the quote character " are called
quoted-fields. The beginning and ending quote are not part of the
field.
The source:
normal string,"quoted-field"
results in the fields
{`normal string`, `quoted-field`}
Within a quoted-field a quote character followed by a second quote
character is considered a single quote.
"the ""word"" is true","a ""quoted-field"""
results in
{`the "word" is true`, `a "quoted-field"`}
Newlines and commas may be included in a quoted-field
"Multi-line
field","comma is ,"
results in
{`Multi-line
field`, `comma is ,`}
writer.go
Code Examples
package main

import (
	"encoding/csv"
	"fmt"
	"io"
	"log"
	"strings"
)

func main() {
	in := `first_name,last_name,username
"Rob","Pike",rob
Ken,Thompson,ken
"Robert","Griesemer","gri"
`
	r := csv.NewReader(strings.NewReader(in))

	for {
		record, err := r.Read()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}

		fmt.Println(record)
	}
}
package main

import (
	"encoding/csv"
	"fmt"
	"log"
	"strings"
)

func main() {
	in := `first_name,last_name,username
"Rob","Pike",rob
Ken,Thompson,ken
"Robert","Griesemer","gri"
`
	r := csv.NewReader(strings.NewReader(in))

	records, err := r.ReadAll()
	if err != nil {
		log.Fatal(err)
	}

	fmt.Print(records)
}
package main

import (
	"encoding/csv"
	"fmt"
	"log"
	"strings"
)

func main() {
	in := `first_name;last_name;username
"Rob";"Pike";rob
# lines beginning with a # character are ignored
Ken;Thompson;ken
"Robert";"Griesemer";"gri"
`
	r := csv.NewReader(strings.NewReader(in))
	r.Comma = ';'
	r.Comment = '#'

	records, err := r.ReadAll()
	if err != nil {
		log.Fatal(err)
	}

	fmt.Print(records)
}
package main

import (
	"encoding/csv"
	"log"
	"os"
)

func main() {
	records := [][]string{
		{"first_name", "last_name", "username"},
		{"Rob", "Pike", "rob"},
		{"Ken", "Thompson", "ken"},
		{"Robert", "Griesemer", "gri"},
	}

	w := csv.NewWriter(os.Stdout)

	for _, record := range records {
		if err := w.Write(record); err != nil {
			log.Fatalln("error writing record to csv:", err)
		}
	}

	// Write any buffered data to the underlying writer (standard output).
	w.Flush()

	if err := w.Error(); err != nil {
		log.Fatal(err)
	}
}
package main

import (
	"encoding/csv"
	"log"
	"os"
)

func main() {
	records := [][]string{
		{"first_name", "last_name", "username"},
		{"Rob", "Pike", "rob"},
		{"Ken", "Thompson", "ken"},
		{"Robert", "Griesemer", "gri"},
	}

	w := csv.NewWriter(os.Stdout)
	w.WriteAll(records) // calls Flush internally

	if err := w.Error(); err != nil {
		log.Fatalln("error writing csv:", err)
	}
}
Package-Level Type Names (total 3, all are exported)
A ParseError is returned for parsing errors.
Line numbers are 1-indexed and columns are 0-indexed.
Column int // Column (rune index) where the error occurred
Err error // The actual error
Line int // Line where the error occurred
StartLine int // Line where the record starts
(*T) Error() string
(*T) Unwrap() error
*T : error
A Reader reads records from a CSV-encoded file.
As returned by NewReader, a Reader expects input conforming to RFC 4180.
The exported fields can be changed to customize the details before the
first call to Read or ReadAll.
The Reader converts all \r\n sequences in its input to plain \n,
including in multiline field values, so that the returned data does
not depend on which line-ending convention an input file uses.
Comma is the field delimiter.
It is set to comma (',') by NewReader.
Comma must be a valid rune and must not be \r, \n,
or the Unicode replacement character (0xFFFD).
Comment, if not 0, is the comment character. Lines beginning with the
Comment character without preceding whitespace are ignored.
With leading whitespace the Comment character becomes part of the
field, even if TrimLeadingSpace is true.
Comment must be a valid rune and must not be \r, \n,
or the Unicode replacement character (0xFFFD).
It must also not be equal to Comma.
FieldsPerRecord is the number of expected fields per record.
If FieldsPerRecord is positive, Read requires each record to
have the given number of fields. If FieldsPerRecord is 0, Read sets it to
the number of fields in the first record, so that future records must
have the same field count. If FieldsPerRecord is negative, no check is
made and records may have a variable number of fields.
If LazyQuotes is true, a quote may appear in an unquoted field and a
non-doubled quote may appear in a quoted field.
ReuseRecord controls whether calls to Read may return a slice sharing
the backing array of the previous call's returned slice for performance.
By default, each call to Read returns newly allocated memory owned by the caller.
TrailingComma bool // Deprecated: No longer used.
If TrimLeadingSpace is true, leading white space in a field is ignored.
This is done even if the field delimiter, Comma, is white space.
fieldIndexes is an index of fields inside recordBuffer.
The i'th field ends at offset fieldIndexes[i] in recordBuffer.
lastRecord is a record cache and only used when ReuseRecord == true.
numLine is the current line being read in the CSV file.
r *bufio.Reader
rawBuffer is a line buffer only used by the readLine method.
recordBuffer holds the unescaped fields, one after another.
The fields can be accessed by using the indexes in fieldIndexes.
E.g., For the row `a,"b","c""d",e`, recordBuffer will contain `abc"de`
and fieldIndexes will contain the indexes [1, 2, 5, 6].
Read reads one record (a slice of fields) from r.
If the record has an unexpected number of fields,
Read returns the record along with the error ErrFieldCount.
Except for that case, Read always returns either a non-nil
record or a non-nil error, but not both.
If there is no data left to be read, Read returns nil, io.EOF.
If ReuseRecord is true, the returned slice may be shared
between multiple calls to Read.
ReadAll reads all the remaining records from r.
Each record is a slice of fields.
A successful call returns err == nil, not err == io.EOF. Because ReadAll is
defined to read until EOF, it does not treat end of file as an error to be
reported.
readLine reads the next line (with the trailing endline).
If EOF is hit without a trailing endline, it will be omitted.
If some bytes were read, then the error is never io.EOF.
The result is only valid until the next call to readLine.
(*T) readRecord(dst []string) ([]string, error)
func NewReader(r io.Reader) *Reader
A Writer writes records using CSV encoding.
As returned by NewWriter, a Writer writes records terminated by a
newline and uses ',' as the field delimiter. The exported fields can be
changed to customize the details before the first call to Write or WriteAll.
Comma is the field delimiter.
If UseCRLF is true, the Writer ends each output line with \r\n instead of \n.
The writes of individual records are buffered.
After all data has been written, the client should call the
Flush method to guarantee all data has been forwarded to
the underlying io.Writer. Any errors that occurred should
be checked by calling the Error method.
Comma rune // Field delimiter (set to ',' by NewWriter)
UseCRLF bool // True to use \r\n as the line terminator
w *bufio.Writer
Error reports any error that has occurred during a previous Write or Flush.
Flush writes any buffered data to the underlying io.Writer.
To check if an error occurred during the Flush, call Error.
Write writes a single CSV record to w along with any necessary quoting.
A record is a slice of strings with each string being one field.
Writes are buffered, so Flush must eventually be called to ensure
that the record is written to the underlying io.Writer.
WriteAll writes multiple CSV records to w using Write and then calls Flush,
returning any error from the Flush.
fieldNeedsQuotes reports whether our field must be enclosed in quotes.
Fields with a Comma, fields with a quote or newline, and
fields which start with a space must be enclosed in quotes.
We used to quote empty strings, but we do not anymore (as of Go 1.4).
The two representations should be equivalent, but Postgres distinguishes
quoted vs non-quoted empty string during database imports, and it has
an option to force the quoted behavior for non-quoted CSV but it has
no option to force the non-quoted behavior for quoted CSV, making
CSV with quoted empty strings strictly less useful.
Not quoting the empty string also makes this package match the behavior
of Microsoft Excel and Google Drive.
For Postgres, quote the data terminating string `\.`.
*T : net/http.Flusher
func NewWriter(w io.Writer) *Writer
Package-Level Functions (total 5, in which 2 are exported)
NewReader returns a new Reader that reads from r.
NewWriter returns a new Writer that writes to w.
lengthNL reports the number of bytes for the trailing \n.
nextRune returns the next rune in b or utf8.RuneError.
The pages are generated with Golds v0.3.2. (GOOS=linux GOARCH=amd64)
Golds is a Go 101 project developed by Tapir Liu.