pandas.read_table — pandas 1.2.5 documentation (2024)

filepath_or_buffer : str, path object or file-like object

Any valid string path is acceptable. The string could be a URL. Valid URL schemes include http, ftp, s3, gs, and file. For file URLs, a host is expected. A local file could be: file://localhost/path/to/table.csv.

If you want to pass in a path object, pandas accepts any os.PathLike.

By file-like object, we refer to objects with a read() method, such as a file handle (e.g. via builtin open function) or StringIO.
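For example, an illustrative snippet reading from an in-memory file-like object:

```python
from io import StringIO

import pandas as pd

# Any object with a read() method works, such as StringIO
buf = StringIO("a\tb\n1\t2\n3\t4\n")
df = pd.read_table(buf)
print(df.shape)  # (2, 2)
```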

sep : str, default '\t' (tab-stop)

Delimiter to use. If sep is None, the C engine cannot automatically detect the separator, but the Python parsing engine can, meaning the latter will be used and automatically detect the separator by Python's builtin sniffer tool, csv.Sniffer. In addition, separators longer than 1 character and different from '\s+' will be interpreted as regular expressions and will also force the use of the Python parsing engine. Note that regex delimiters are prone to ignoring quoted data. Regex example: '\r\t'.
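For instance, a multi-character separator (here a made-up `;;` delimiter) is treated as a regex and requires the Python engine:

```python
from io import StringIO

import pandas as pd

# A separator longer than one character forces the Python parsing engine;
# passing engine="python" explicitly avoids a ParserWarning
data = StringIO("a;;b;;c\n1;;2;;3\n")
df = pd.read_table(data, sep=";;", engine="python")
print(list(df.columns))  # ['a', 'b', 'c']
```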

delimiter : str, default None

Alias for sep.

header : int, list of int, default 'infer'

Row number(s) to use as the column names, and the start of the data. Default behavior is to infer the column names: if no names are passed the behavior is identical to header=0 and column names are inferred from the first line of the file; if column names are passed explicitly then the behavior is identical to header=None. Explicitly pass header=0 to be able to replace existing names. The header can be a list of integers that specify row locations for a multi-index on the columns e.g. [0,1,3]. Intervening rows that are not specified will be skipped (e.g. 2 in this example is skipped). Note that this parameter ignores commented lines and empty lines if skip_blank_lines=True, so header=0 denotes the first line of data rather than the first line of the file.
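A sketch of the list-of-integers form, which builds a MultiIndex on the columns from two header rows:

```python
from io import StringIO

import pandas as pd

# Rows 0 and 1 together form a two-level column MultiIndex
data = StringIO("x\tx\ty\na\tb\tc\n1\t2\t3\n")
df = pd.read_table(data, header=[0, 1])
print(df.columns.nlevels)  # 2
```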

names : array-like, optional

List of column names to use. If the file contains a header row, then you should explicitly pass header=0 to override the column names. Duplicates in this list are not allowed.
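For example, overriding an existing header row (the column names here are illustrative):

```python
from io import StringIO

import pandas as pd

# header=0 consumes the file's own header row so names can replace it
data = StringIO("old1\told2\n1\t2\n")
df = pd.read_table(data, names=["a", "b"], header=0)
print(list(df.columns))  # ['a', 'b']
```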

index_col : int, str, sequence of int / str, or False, default None

Column(s) to use as the row labels of the DataFrame, either given as string name or column index. If a sequence of int / str is given, a MultiIndex is used.

Note: index_col=False can be used to force pandas to not use the first column as the index, e.g. when you have a malformed file with delimiters at the end of each line.
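A small sketch selecting the index column by name (the 'id' column is made up for illustration):

```python
from io import StringIO

import pandas as pd

# Use the 'id' column as the row labels
data = StringIO("id\ta\tb\nx\t1\t2\ny\t3\t4\n")
df = pd.read_table(data, index_col="id")
print(list(df.index))  # ['x', 'y']
```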

usecols : list-like or callable, optional

Return a subset of the columns. If list-like, all elements must either be positional (i.e. integer indices into the document columns) or strings that correspond to column names provided either by the user in names or inferred from the document header row(s). For example, a valid list-like usecols parameter would be [0, 1, 2] or ['foo', 'bar', 'baz']. Element order is ignored, so usecols=[0, 1] is the same as [1, 0]. To instantiate a DataFrame from data with element order preserved use pd.read_csv(data, usecols=['foo', 'bar'])[['foo', 'bar']] for columns in ['foo', 'bar'] order or pd.read_csv(data, usecols=['foo', 'bar'])[['bar', 'foo']] for ['bar', 'foo'] order.

If callable, the callable function will be evaluated against the column names, returning names where the callable function evaluates to True. An example of a valid callable argument would be lambda x: x.upper() in ['AAA', 'BBB', 'DDD']. Using this parameter results in much faster parsing time and lower memory usage.
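Both forms, side by side (note the list-like form returns columns in document order, not list order):

```python
from io import StringIO

import pandas as pd

data = "a\tb\tc\n1\t2\t3\n"

# List-like: element order is ignored, document order wins
df1 = pd.read_table(StringIO(data), usecols=["c", "a"])

# Callable: keep columns whose name passes the predicate
df2 = pd.read_table(StringIO(data), usecols=lambda x: x in {"a", "c"})
print(list(df1.columns))  # ['a', 'c']
```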

squeeze : bool, default False

If the parsed data only contains one column then return a Series.

prefix : str, optional

Prefix to add to column numbers when no header, e.g. ‘X’ for X0, X1, …

mangle_dupe_cols : bool, default True

Duplicate columns will be specified as 'X', 'X.1', …'X.N', rather than 'X'…'X'. Passing in False will cause data to be overwritten if there are duplicate names in the columns.

dtype : Type name or dict of column -> type, optional

Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32, 'c': 'Int64'}. Use str or object together with suitable na_values settings to preserve and not interpret dtype. If converters are specified, they will be applied INSTEAD of dtype conversion.
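A common use is keeping string codes intact, since the default inference would drop leading zeros (the column names here are made up):

```python
from io import StringIO

import numpy as np
import pandas as pd

# str preserves '007' verbatim; float64 is requested explicitly for 'val'
data = StringIO("code\tval\n007\t1.5\n042\t2.5\n")
df = pd.read_table(data, dtype={"code": str, "val": np.float64})
print(df["code"].tolist())  # ['007', '042']
```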

engine : {'c', 'python'}, optional

Parser engine to use. The C engine is faster while the python engine is currently more feature-complete.

converters : dict, optional

Dict of functions for converting values in certain columns. Keys can either be integers or column labels.
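For example, stripping a currency symbol during parsing (an illustrative sketch; the column names are invented):

```python
from io import StringIO

import pandas as pd

# The converter runs on the raw string for each cell in 'price'
data = StringIO("item\tprice\npen\t$1.50\nink\t$2.25\n")
df = pd.read_table(data, converters={"price": lambda s: float(s.lstrip("$"))})
print(df["price"].sum())  # 3.75
```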

true_values : list, optional

Values to consider as True.

false_values : list, optional

Values to consider as False.
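The two parameters are typically used together to map custom tokens to booleans (the 'yes'/'no' tokens here are illustrative):

```python
from io import StringIO

import pandas as pd

# Every value in the column must map to True or False for the
# conversion to take effect
data = StringIO("flag\nyes\nno\n")
df = pd.read_table(data, true_values=["yes"], false_values=["no"])
print(df["flag"].tolist())  # [True, False]
```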

skipinitialspace : bool, default False

Skip spaces after delimiter.

skiprows : list-like, int or callable, optional

Line numbers to skip (0-indexed) or number of lines to skip (int) at the start of the file.

If callable, the callable function will be evaluated against the row indices, returning True if the row should be skipped and False otherwise. An example of a valid callable argument would be lambda x: x in [0, 2].
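The int and callable forms, shown on the same (made-up) input:

```python
from io import StringIO

import pandas as pd

data = "junk\na\tb\n1\t2\n3\t4\n"

# int: skip the first line of the file
df1 = pd.read_table(StringIO(data), skiprows=1)

# callable: skip any row whose index satisfies the predicate
df2 = pd.read_table(StringIO(data), skiprows=lambda x: x == 0)
print(df1.shape)  # (2, 2)
```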

skipfooter : int, default 0

Number of lines at bottom of file to skip (Unsupported with engine='c').

nrows : int, optional

Number of rows of file to read. Useful for reading pieces of large files.

na_values : scalar, str, list-like, or dict, optional

Additional strings to recognize as NA/NaN. If dict passed, specific per-column NA values. By default the following values are interpreted as NaN: '', '#N/A', '#N/A N/A', '#NA', '-1.#IND', '-1.#QNAN', '-NaN', '-nan', '1.#IND', '1.#QNAN', '<NA>', 'N/A', 'NA', 'NULL', 'NaN', 'n/a', 'nan', 'null'.
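A sketch of the dict form, treating a made-up sentinel value as missing in one column only:

```python
from io import StringIO

import pandas as pd

# '-999' is NA only in column 'b'; other columns are unaffected
data = StringIO("a\tb\n1\t-999\n2\t5\n")
df = pd.read_table(data, na_values={"b": ["-999"]})
print(df["b"].isna().sum())  # 1
```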

keep_default_na : bool, default True

Whether or not to include the default NaN values when parsing the data. Depending on whether na_values is passed in, the behavior is as follows:

  • If keep_default_na is True, and na_values are specified, na_values is appended to the default NaN values used for parsing.

  • If keep_default_na is True, and na_values are not specified, only the default NaN values are used for parsing.

  • If keep_default_na is False, and na_values are specified, only the NaN values specified in na_values are used for parsing.

  • If keep_default_na is False, and na_values are not specified, no strings will be parsed as NaN.

Note that if na_filter is passed in as False, the keep_default_na and na_values parameters will be ignored.
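The fourth case above, sketched with inline data: with keep_default_na=False and no na_values, even 'NA' survives as a literal string.

```python
from io import StringIO

import pandas as pd

# No strings are parsed as NaN, so 'NA' stays as text
data = StringIO("x\nNA\nfoo\n")
df = pd.read_table(data, keep_default_na=False)
print(df["x"].tolist())  # ['NA', 'foo']
```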

na_filter : bool, default True

Detect missing value markers (empty strings and the value of na_values). In data without any NAs, passing na_filter=False can improve the performance of reading a large file.

verbose : bool, default False

Indicate number of NA values placed in non-numeric columns.

skip_blank_lines : bool, default True

If True, skip over blank lines rather than interpreting as NaN values.

parse_dates : bool or list of int or names or list of lists or dict, default False

The behavior is as follows:

  • boolean. If True -> try parsing the index.

  • list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column.

  • list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as a single date column.

  • dict, e.g. {'foo' : [1, 3]} -> parse columns 1, 3 as date and call result 'foo'

If a column or index cannot be represented as an array of datetimes, say because of an unparsable value or a mixture of timezones, the column or index will be returned unaltered as an object data type. For non-standard datetime parsing, use pd.to_datetime after pd.read_csv. To parse an index or column with a mixture of timezones, specify date_parser to be a partially-applied pandas.to_datetime() with utc=True. See Parsing a CSV with mixed timezones for more.

Note: A fast-path exists for iso8601-formatted dates.
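A sketch of the list-of-names form, plus the mixed-timezone workaround described above (converting with pd.to_datetime after reading); the column names and timestamps are illustrative:

```python
from io import StringIO

import pandas as pd

# List of names: parse the 'date' column as datetimes
data = StringIO("date\tval\n2021-01-01\t1\n2021-01-02\t2\n")
df = pd.read_table(data, parse_dates=["date"])

# Mixed UTC offsets: read as strings, then convert with utc=True
mixed = StringIO("ts\n2021-01-01 00:00:00+01:00\n2021-01-01 00:00:00+05:00\n")
df2 = pd.read_table(mixed)
df2["ts"] = pd.to_datetime(df2["ts"], utc=True)
print(df2["ts"].dtype)  # datetime64[ns, UTC]
```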

infer_datetime_format : bool, default False

If True and parse_dates is enabled, pandas will attempt to infer the format of the datetime strings in the columns, and if it can be inferred, switch to a faster method of parsing them. In some cases this can increase the parsing speed by 5-10x.

keep_date_col : bool, default False

If True and parse_dates specifies combining multiple columns then keep the original columns.

date_parser : function, optional

Function to use for converting a sequence of string columns to an array of datetime instances. The default uses dateutil.parser.parser to do the conversion. Pandas will try to call date_parser in three different ways, advancing to the next if an exception occurs: 1) Pass one or more arrays (as defined by parse_dates) as arguments; 2) concatenate (row-wise) the string values from the columns defined by parse_dates into a single array and pass that; and 3) call date_parser once for each row using one or more strings (corresponding to the columns defined by parse_dates) as arguments.

dayfirst : bool, default False

DD/MM format dates, international and European format.

cache_dates : bool, default True

If True, use a cache of unique, converted dates to apply the datetime conversion. May produce significant speed-up when parsing duplicate date strings, especially ones with timezone offsets.

New in version 0.25.0.

iterator : bool, default False

Return TextFileReader object for iteration or getting chunks with get_chunk().

Changed in version 1.2: TextFileReader is a context manager.

chunksize : int, optional

Return TextFileReader object for iteration. See the IO Tools docs for more information on iterator and chunksize.

Changed in version 1.2: TextFileReader is a context manager.
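A sketch of chunked reading using the context-manager behavior noted above (the data is made up):

```python
from io import StringIO

import pandas as pd

# Process two rows at a time; the TextFileReader closes itself on exit
data = StringIO("a\n1\n2\n3\n4\n5\n")
total = 0
with pd.read_table(data, chunksize=2) as reader:
    for chunk in reader:
        total += chunk["a"].sum()
print(total)  # 15
```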

compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'

For on-the-fly decompression of on-disk data. If 'infer' and filepath_or_buffer is path-like, then detect compression from the following extensions: '.gz', '.bz2', '.zip', or '.xz' (otherwise no decompression). If using 'zip', the ZIP file must contain only one data file to be read in. Set to None for no decompression.

thousands : str, optional

Thousands separator.

decimal : str, default '.'

Character to recognize as decimal point (e.g. use ‘,’ for European data).
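The two parameters combine naturally for European-style numbers (an illustrative sketch):

```python
from io import StringIO

import pandas as pd

# '.' groups thousands, ',' marks the decimal point
data = StringIO("val\n1.234,5\n2.000,0\n")
df = pd.read_table(data, thousands=".", decimal=",")
print(df["val"].tolist())  # [1234.5, 2000.0]
```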

lineterminator : str (length 1), optional

Character to break file into lines. Only valid with C parser.

quotechar : str (length 1), optional

The character used to denote the start and end of a quoted item. Quoted items can include the delimiter and it will be ignored.

quoting : int or csv.QUOTE_* instance, default 0

Control field quoting behavior per csv.QUOTE_* constants. Use one of QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).

doublequote : bool, default True

When quotechar is specified and quoting is not QUOTE_NONE, indicate whether or not to interpret two consecutive quotechar elements INSIDE a field as a single quotechar element.

escapechar : str (length 1), optional

One-character string used to escape other characters.

comment : str, optional

Indicates remainder of line should not be parsed. If found at the beginning of a line, the line will be ignored altogether. This parameter must be a single character. Like empty lines (as long as skip_blank_lines=True), fully commented lines are ignored by the parameter header but not by skiprows. For example, if comment='#', parsing #empty\na,b,c\n1,2,3 with header=0 will result in 'a,b,c' being treated as the header.
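The example above, sketched with tab-separated data:

```python
from io import StringIO

import pandas as pd

# The fully commented first line is skipped; 'a b c' becomes the header
data = StringIO("#empty\na\tb\tc\n1\t2\t3\n")
df = pd.read_table(data, comment="#", header=0)
print(list(df.columns))  # ['a', 'b', 'c']
```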

encoding : str, optional

Encoding to use for UTF when reading/writing (ex. 'utf-8'). List of Python standard encodings.

Changed in version 1.2: When encoding is None, errors="replace" is passed to open(). Otherwise, errors="strict" is passed to open(). This behavior was previously only the case for engine="python".

dialect : str or csv.Dialect, optional

If provided, this parameter will override values (default or not) for the following parameters: delimiter, doublequote, escapechar, skipinitialspace, quotechar, and quoting. If it is necessary to override values, a ParserWarning will be issued. See csv.Dialect documentation for more details.

error_bad_lines : bool, default True

Lines with too many fields (e.g. a csv line with too many commas) will by default cause an exception to be raised, and no DataFrame will be returned. If False, then these "bad lines" will be dropped from the DataFrame that is returned.

warn_bad_lines : bool, default True

If error_bad_lines is False, and warn_bad_lines is True, a warning for each "bad line" will be output.

delim_whitespace : bool, default False

Specifies whether or not whitespace (e.g. ' ' or '\t') will be used as the sep. Equivalent to setting sep='\s+'. If this option is set to True, nothing should be passed in for the delimiter parameter.

low_memory : bool, default True

Internally process the file in chunks, resulting in lower memory use while parsing, but possibly mixed type inference. To ensure no mixed types either set False, or specify the type with the dtype parameter. Note that the entire file is read into a single DataFrame regardless; use the chunksize or iterator parameter to return the data in chunks. (Only valid with C parser).

memory_map : bool, default False

If a filepath is provided for filepath_or_buffer, map the file object directly onto memory and access the data directly from there. Using this option can improve performance because there is no longer any I/O overhead.

float_precision : str, optional

Specifies which converter the C engine should use for floating-point values. The options are None or 'high' for the ordinary converter, 'legacy' for the original lower precision pandas converter, and 'round_trip' for the round-trip converter.

Changed in version 1.2.

storage_options : dict, optional

Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc., if using a URL that will be parsed by fsspec, e.g., starting "s3://", "gcs://". An error will be raised if providing this argument with a non-fsspec URL. See the fsspec and backend storage implementation docs for the set of allowed keys and values.

New in version 1.2.
