Questions and Answers


How many ways could you have a table with no data (0 rows) in Impala?

  1. CREATE TABLE and then don't do anything.
  2. CTAS that doesn't match any rows.
  3. Some data files but in unrecognized format. (Expect errors during query.)
  4. Data inserted outside Impala's control and you didn't REFRESH yet.
  5. CREATE EXTERNAL TABLE or ALTER TABLE ... SET LOCATION and there was nothing in the HDFS directory.
  6. Partitioned table with some ADD PARTITIONs; directories but no actual data files.
  7. Some files in the data directory but hidden by leading . or _ in their names.
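
For instance, a minimal sketch of the first two variations, using hypothetical table names:

CREATE TABLE empty1 (c1 INT);
CREATE TABLE empty2 AS SELECT * FROM empty1 WHERE 1 = 0;
SELECT COUNT(*) FROM empty1;
SELECT COUNT(*) FROM empty2;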

Takeaway

Examples

make_table_empty.log


What is the smallest non-zero amount of data you could possibly put into an Impala table?

It depends.

  1. The smallest text data file you could create would be 2 bytes: a single character (for example, a digit 0-9) followed by a newline.
  2. The smallest data representation would be a Boolean value, but if you put that into a text file it would be 'true' or 'false' plus a newline.
  3. If you put the Boolean value into a binary file, it would be encoded possibly into a compact form, but the binary file format might have its own bytes of overhead.
  4. Even a NULL would be represented in text as \N, for a total of 3 characters (including newline).

Takeaway

Examples

tiny_tiny_table.log


What are some of the ways a table can be "big"?

  1. Lots of rows.
  2. Lots of columns, i.e. "wide".
  3. Lots of partitions, i.e. "granular".
  4. Big values in some columns.
  5. Large total data volume.
  6. Large data volume in each partition.
  7. Many data files.

Takeaway

There are different considerations and different sweet spots for each.

  1. Consider partitioning. Choose efficient data types. Look at column characteristics to choose file format, e.g. sparse values or good compressibility favor Parquet.
  2. Crunch data from selected columns? Parquet a good choice. Look up all associated values for single row? HBase a good choice.
  3. Consider reducing granularity to get more data in each partition. Consider dropping old partitions, logically via external table, not necessarily actually get rid of the data. Could potentially run into file descriptor limits during dynamic partitioned insert for Parquet.
  4. Sometimes Impala can read bigger values than it can insert or manipulate, e.g. up to 1 MB for strings.
  5. Look at the file sizes and try to find the sweet spot with # of files vs. # of HDFS blocks. Consider Parquet. Consider partitioning.
  6. If necessary, reduce granularity of partitioning columns. Or consider compacting data in partition to find the sweet spot with # of files vs. # of HDFS blocks.

Examples

big_tables.log


How many ways could you use Impala to calculate 2+2?

  1. SELECT 2+2 -- Alternative to Oracle's DUAL table.
  2. SELECT 2+2 FROM t1 -- but t1 better have exactly one row in it. Exactly like Oracle's DUAL table.
  3. SELECT 2+2 FROM t1 LIMIT 1 -- but t1 better have some data in it.
  4. Put 2,2 into a table, SELECT c1+c2 FROM t1.
  5. INSERT 2 rows, INSERT 2 rows, SELECT COUNT(*) FROM t1.
  6. Put 2 rows in a data file, SELECT COUNT(*) FROM a UNION ALL of the table with itself.
  7. Put 2 rows in a data file, SELECT COUNT(*) FROM a join of the table with itself.
  8. SELECT <query with a big result set> LIMIT 2+2;
  9. Make a column with a 2-character value, SELECT LENGTH(CONCAT(c1,c1)).
  10. Make columns with arbitrary-length values, SELECT LENGTH(SUBSTR(c1,1,2)) + LENGTH(REGEXP_EXTRACT(c2,'^(..)',1)) FROM t1 WHERE LENGTH(c1) >= 2 AND LENGTH(c2) >= 2 LIMIT 1.
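
A quick sketch of the first few variations, with t_two as a hypothetical two-column table:

SELECT 2+2;
CREATE TABLE t_two (c1 INT, c2 INT);
INSERT INTO t_two VALUES (2, 2);
SELECT c1 + c2 FROM t_two;
SELECT 2+2 FROM t_two LIMIT 1;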

Takeaway

Examples

two_plus_two.log


You have a table with N rows. How do you control how many rows are in the result set?

  1. LIMIT puts a cap on the size of the result set.
  2. Each additional AND condition in the WHERE clause makes the result set smaller (or at least no larger).
  3. No WHERE clause? Same number of results as input rows.
  4. Aggregation functions may condense the results.
  5. GROUP BY expands the results from aggregation functions.
  6. UNION expands the result set (but no dupes); UNION ALL even more (allows dupes).
  7. JOIN can make the result set much smaller or much larger. Think through!
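
A sketch of how these clauses change the row count, assuming a hypothetical table t1 with columns c1 and c2:

SELECT * FROM t1 LIMIT 3;                        -- at most 3 rows
SELECT * FROM t1 WHERE c1 > 5 AND c2 = 'x';      -- typically fewer rows than t1 holds
SELECT COUNT(*) FROM t1;                         -- condensed to a single row
SELECT c1, COUNT(*) FROM t1 GROUP BY c1;         -- one row per distinct c1 value
SELECT c1 FROM t1 UNION ALL SELECT c1 FROM t1;   -- twice as many rows, duplicates kept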

Takeaway

Examples

ten_rows.log


You have a table with N columns. How do you control how many columns are in the result set?

  1. SELECT * produces N columns.
  2. You can name specific columns, in any order or subset.
  3. Also with aliases to give different names.
  4. You can intermix expressions, function calls, other things that aren't actual columns.
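
A sketch of shaping the columns, again with a hypothetical table t1:

SELECT * FROM t1;                                             -- all N columns
SELECT c3, c1 FROM t1;                                        -- a subset, reordered
SELECT c1 AS id, upper(c2) AS name, c3 * 100 AS pct FROM t1;  -- aliases, functions, expressions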

Takeaway

You can subset, rearrange, expressionize, and alias your way to any "shape" of a result set you want. Combine with the techniques from the previous question to fine-tune your result set in both dimensions.

Examples

four_columns.log


For a table with a million (or a billion) rows, how much memory does it take to do SELECT c1 or SELECT COUNT(DISTINCT c1)?

  1. Depends on cardinality of c1, type of c1, average length of c1 values, number of nodes involved in query, and number of cores per host.
  2. Only depends a little on file format.
  3. With more than one DISTINCT column, memory usage increases proportionally based on column size.
  4. Turn on APPX_COUNT_DISTINCT and the answer changes -- memory use is bounded but answer is approximate.
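
A sketch of the exact and approximate approaches, with t1 and c1 as placeholders:

SELECT COUNT(DISTINCT c1) FROM t1;   -- exact; memory grows with the number of distinct values
SET APPX_COUNT_DISTINCT=true;
SELECT COUNT(DISTINCT c1) FROM t1;   -- now rewritten to use the NDV() estimator; bounded memory, approximate answer
SELECT NDV(c1) FROM t1;              -- or call the estimator directly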

Takeaway

Examples

count_distinct.log


Why would you use Impala vs. Hive?

  1. They are complementary. Run Impala on Hive-created tables, or vice versa. Impala is faster and less cryptic and Java-ish for lots of stuff. Run queries side-by-side to see.
  2. You use Hive for multi-hour critical parts of the ETL process. It saves to disk at each step. It has some write/transform capabilities that Impala doesn't yet. Then Impala can pick up the results and run analytic queries on the data.
  3. Some people have cut hours out of their ETL pipelines by switching some stages from Hive to Impala. As Impala gets more built-in functions, file format writers, etc. the balance can shift more.
  4. Just think how many different SQL command lines there are on an Oracle Big Data Appliance. Oracle, MySQL, PostgreSQL, Impala, Hive. More arrows in the quiver.

Takeaway

Examples

Blog post


INSERT INTO t1 SELECT * FROM raw_data - What performance and scalability considerations are there for a copy operation like this?

  1. Logically, the end result is the same. Physically, it depends, and that's where performance and scalability come into the picture.
  2. If destination is a text table, all the data gets expanded to string format and written out to text files. Very simple and predictable, but slow to query due to bulky data and string-to-number/timestamp/Boolean conversions.
  3. Data could also be converted from Parquet back to text; one big file could be split into several; many small files could be compacted into few big ones.
  4. If T2 is a Parquet table, all the data gets organized by column, encoded into binary format, then compressed, and written out to files with bigger-than-normal block size. Slower and more memory-intensive to write, but faster to query.
  5. The overall number of data files depends on the number of nodes that participate in the SELECT portion because they hold data from the source table.
  6. If T3 is a partitioned table, one or more columns at the end of the SELECT * are represented as subdirectories in HDFS, not stored in the data files. The number and sizes of data files is determined by the cardinality of those columns. Queries should use those columns for filtering.
  7. If T4 is a partitioned Parquet table, it could be extra memory-intensive to buffer all those big HDFS data blocks while constructing data for each individual partition. Important to have run COMPUTE STATS on the source table RAW_DATA first.

Takeaway

File format and/or partitioning are important considerations for all write operations. The characteristics of the destination table are very important in INSERT operations.

Examples

N/A, we'll see variations of file formats and partitioning in other examples.


Discuss what's really happening inside aggregation function calls. How does GROUP BY change things?

SELECT COUNT(c2), SUM(c2), MAX(c2), MIN(c2), AVG(c2), COUNT(DISTINCT c2) FROM t1;
SELECT c1, COUNT(c2), SUM(c2), MAX(c2), MIN(c2), AVG(c2), COUNT(DISTINCT c2) FROM t1
  GROUP BY c1;

For queries without GROUP BY

  1. COUNT keeps a single counter value. A BIGINT since you don't know in advance how many there could be.
  2. SUM keeps a single accumulator value. A BIGINT or DOUBLE since the sum could overflow the type of the actual column.
  3. MAX and MIN keep a single highest/lowest value. The same type as the column itself.
  4. AVG keeps two intermediate values, equivalent to a SUM and a COUNT. Then divides at the end and only returns the result of the division.
  5. COUNT(DISTINCT) keeps a hash table of all the different values. Therefore strongly prefer to do on a numeric ID column rather than a string, to keep memory usage within sensible bounds for high-cardinality columns.

For queries with GROUP BY

  1. COUNT keeps a single counter value for each different value of the GROUP BY column (c1).
  2. SUM keeps a single accumulator value for each different value of the GROUP BY column (c1).
  3. MAX and MIN keep a single highest/lowest value for each different value of the GROUP BY column (c1).
  4. AVG keeps two intermediate values for each GROUP BY value, equivalent to a SUM and a COUNT.
  5. There could be just a few of these intermediate counter, sum, etc. values (say if C1 was really STATE).
  6. Or there could be billions of these intermediate counter, sum, etc. values (say if C1 was a unique column).

The distribution and parallelization depend on how many nodes actually hold data relevant to the query.

Takeaway

Aggregation is a crucial aspect of Big Data queries. Function calls, DISTINCT, and GROUP BY all factor in.

Examples

aggregate_memory_usage.log, kings_of_france.log


What are the implications of normalized vs. denormalized data?

Takeaway

Examples

normalize.log


What are some ways "big data" use cases differ from traditional database work?

Now we dig another level deep into the data, looking for insights and trends rather than simple answers.

Takeaway

Examples

What is wrong with using a million rows of data? (O'Reilly blog post)


How would I synthesize some data to use?

  1. Generate text files with random or calibrated distributions and ranges.
  2. Shoot for billions of rows / tens of GB of data as a starting point.
  3. INSERT...SELECT variations of existing data back into the same table.
  4. Use ROW_NUMBER() to assign unique IDs, or DENSE_RANK() to assign category IDs.
  5. INSERT...SELECT into a different table to convert to Parquet.
  6. Use now() +/- INTERVAL to generate time sequences.
  7. Be careful doing INSERT...SELECT with 10 years of data into partitioned Parquet table. COMPUTE STATS, consider using multiple INSERTs with static partitioning, be ready to hint SHUFFLE.
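
A sketch of the INSERT...SELECT and ROW_NUMBER()/DENSE_RANK() techniques, with hypothetical table and column names:

-- Double the volume by re-inserting shifted variations of existing rows
-- (assumes columns id BIGINT, category STRING, val DOUBLE, ts TIMESTAMP).
INSERT INTO sample_data
  SELECT id + 1000000, category, val * 1.1, ts + INTERVAL 1 DAYS FROM sample_data;

-- Assign unique IDs and category IDs while converting to Parquet.
CREATE TABLE sample_parquet STORED AS PARQUET AS
  SELECT ROW_NUMBER() OVER (ORDER BY val) AS id,
         DENSE_RANK() OVER (ORDER BY category) AS category_id,
         category, val, ts
  FROM sample_data;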

Takeaway

Generate a realistic volume and distribution of data. Convert/arrange into realistic layout. "Realistic" is the key for all testing, prototyping, benchmarking.

Examples

synthesize.log, multicols.py, usa_cities.lst, multicols.log


What are the ramifications as cluster size increases?

  1. Special cases for cluster sizes 1, 2, and 3.
  2. Cluster size 4: now you have the choice of whether the namenode is on the same machine as a datanode or not.
  3. Cluster size 5+: now some blocks are not on all nodes.
  4. As cluster size increases:
    • Chance of node failure during a query increases, but that's not so bad; just retry.
    • Chance of "1 slow node" increases.
    • More of the cluster potentially sits idle during each query. It needs higher volume / more data blocks to keep the cluster "busy". You can get higher scalability without necessarily higher throughput for low-volume queries.

Takeaway

Bigger cluster helps for scalability. Different issues as cluster grows.

Examples

N/A


What are ramifications of cluster topology?

  1. Coordinator node always does a little more work than others.
    • Thus round-robin the requests via a load-balancing proxy to spread out this extra load.
  2. The namenode has extra work to do to serve up the block metadata and do metastore stuff.
    • Thus when you have a big enough cluster, don't use it as a datanode.
    • But do co-locate the Impala statestore and catalog server there.
  3. One slow or low-powered node can drag down query performance.
    • Because Impala divides up the work equally.
    • E.g. someone mixing an 8 GB laptop node with much bigger nodes; the laptop running out of memory would demolish query performance.
  4. The more nodes, the more "aggregate memory" there is for big aggregations and joins.

Takeaway

Prefer beefy nodes of uniform capacity.

Examples

N/A


What are the ramifications of compression?

  1. Less disk use, less network, but more CPU and potentially more memory.
    • Hadoop typically has high-powered disks, big blocks; so modest reduction in I/O not necessarily a big deal.
  2. Works better built into binary formats rather than applied to text.
  3. Gzipped text is not splittable. Reading it is only supported in Impala 2.x; memory efficiency improved in 2.1.
    • E.g. with Apache logs, maybe use only for initial ingest and then one-time convert to Parquet.
    • Keep size of .gz files to < 128 MB, then the non-splittability wouldn't be a big deal.
  4. LZO text requires an extra indexing step and additional setup.
  5. It's the icing on the cake with Parquet; can even turn it off and still get compact data.
  6. Can be applied or not or different codec on a file-by-file basis, not an inherent property of the table.
  7. Parquet + codec likely to be better compression than text + codec due to encoding and rearranging.

Takeaway

Revisit your assumptions from previous experience, and test with realistic data volumes.

Examples

N/A, we'll see compressed and uncompressed data size in other examples.


What does the ETL cycle look like?

Takeaway

Impala typically comes in near the end of the cycle.

Examples

N/A, we'll see different ways of bringing data into Impala in other examples.


How do you know a column is unique? No nulls? Only contains a specific set of values?

Takeaway

Examples

Some examples of checking assumptions in kings_of_france.log.


What's wrong with SELECT * FROM t1 in a Hadoop / Big Data context?

  1. You don't know in advance how many rows are in t1, could be zillions. Could have changed since last you checked.
  2. You don't know in advance how many columns are in t1, could be thousands. Could have changed since you last checked.
  3. In a table with billions of rows, collecting unnecessary columns for the result set could waste gigabytes of memory.
  4. In a table with billions of rows in Parquet format, reading unnecessary columns could waste gigabytes of I/O.

Takeaway

Some sloppy habits that you could get away with before are harder to get away with in Hadoop.

  1. Use LIMIT, WHERE, aggregate functions.
  2. Use DESCRIBE to see all the columns. (Table definition might still change afterward.) Use views to define subsets of columns.
  3. Use EXPLAIN to see preliminary estimates of memory usage, SUMMARY to see report of actual memory usage.
  4. Use EXPLAIN to see preliminary estimates of I/O reads, SUMMARY to see report of actual I/O reads.

Pro tip: ALTER TABLE DROP COLUMN to get rid of unused columns from the end. Doesn't actually change data files!
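
A sketch combining a few of these habits, with a hypothetical huge_table:

DESCRIBE huge_table;
CREATE VIEW huge_table_slim AS SELECT cust_id, order_date, total FROM huge_table;
EXPLAIN SELECT cust_id, SUM(total) FROM huge_table_slim GROUP BY cust_id LIMIT 100;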

Examples

N/A, we'll see lots of examples of LIMIT and specific column queries in other examples.


How do you recover if you made a mistake in your schema definition? (Schema evolution.)

  1. Table in wrong database (e.g. DEFAULT)? ALTER TABLE t1 RENAME TO db.t1;
  2. Wrong table name? ALTER TABLE t1 RENAME TO t2;
  3. Wrong column name? ALTER TABLE t1 CHANGE c1 c2 <same_type>;
  4. Wrong column type? ALTER TABLE t1 CHANGE c1 c1 <other_type>;
  5. Need an extra column? ALTER TABLE t1 ADD COLUMNS (c7 <type>, c8 <type>, ...);
  6. Don't need that last column? ALTER TABLE t1 DROP COLUMN c8;
  7. Completely different set of columns? ALTER TABLE t1 REPLACE COLUMNS (c1 <type>, c2 <type>, ...);

Takeaway

Examples

N/A, these DDL statements illustrated in various examples.


How do you deal with input text files in varied or inconsistent formats?

  1. Set delimiter character for simple delimited files.
  2. Rewrite to remove extraneous quotes etc. for over-ambitious text formats.
  3. Create a view to regularize over-ambitious text formats if you can't change original files.
  4. Then copy to a new table in "good" format as soon as possible.
  5. Create a view or copy to a new table to handle unexpected nulls, duplicates, other relational glitches.
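
For example, a hedged sketch of regularizing a quote-heavy CSV file, first logically with a view and then physically with a copy (the table names and HDFS path are made up):

CREATE EXTERNAL TABLE raw_csv (name STRING, amount STRING)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  LOCATION '/user/etl/staging/raw_csv';

CREATE VIEW clean_csv AS
  SELECT regexp_replace(name, '^"|"$', '') AS name,
         CAST(regexp_replace(amount, '^"|"$', '') AS DECIMAL(9,2)) AS amount
  FROM raw_csv;

CREATE TABLE good_format STORED AS PARQUET AS SELECT * FROM clean_csv;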

Takeaway

Can handle troublesome data logically (with views) or physically (copying and rewriting).

Examples

text_tables.log


What are some trouble signs to look for in super-complicated queries?

  1. Some table missing stats.
  2. Excessive function calls. Maybe you could precompute in a new table, or collapse into a single UDF (e.g. CONCAT() of several substrings).
  3. Some function result that can't be turned into a constant or codegen'ed.
  4. If the function is on a partition key column, is it inhibiting partition pruning? Check via EXPLAIN.
  5. Scrutinize the innermost loop of any query with subqueries. Make it as selective as possible.

Takeaway

Little trouble spots can turn into big performance bottlenecks when the data volume is high.

Examples

N/A for this "Getting Started" tutorial.


What problems could arise if the data is too small?

  1. Misleading benchmark results if you scaled data down in effort to produce volume every system could handle.
  2. Too-small data in each partition, bottleneck in partition pruning, "many small files" problem. Prefer files close to HDFS block size of 128 MB!
  3. Not enough HDFS blocks = not enough parallelism to take advantage of big cluster.
  4. E.g. 10-node cluster, 9 blocks = idleness. 10-node cluster, 11 blocks = half speed. 10-node cluster, 10 blocks = danger of skew due to HDFS block placement. 10-node cluster, 100 blocks = better chance of even distribution.
  5. Small fixed startup costs like codegen eat up the benefits of parallel execution. SET DISABLE_CODEGEN=true when lots of tiny queries.

Takeaway

Use realistic data volumes. Look for use cases that exceed the capacity of other systems.

Examples

N/A


What are the ways data files could be laid out?

  1. Data files in local filesystem? No good, have to be in HDFS.
  2. Data files all in same directory in HDFS? Want to leave them there? CREATE EXTERNAL TABLE. No partitioning.
  3. Data files all in same directory in HDFS? Don't care if they move? CREATE TABLE + LOAD DATA. Still no partitioning.
  4. Data files all in same directory in HDFS? Want to copy and/or convert them for Impala? CREATE EXTERNAL TABLE + (CREATE TABLE + INSERT...SELECT or CTAS).
  5. For any of these sets of files, they had better all be the same format but they could differ in size, name, compression.

Now we start getting into partitioning...

  1. Could take new set of files, leave in original location, via ALTER TABLE ... ADD PARTITION ... LOCATION.
  2. Could take new set of files, move from original location, via ALTER TABLE ... ADD PARTITION + LOAD DATA PARTITION (...).
  3. Could take new set of files, copy/convert from original location, via ALTER TABLE ... ADD PARTITION + INSERT ... PARTITION (...).
  4. You can specify all partition keys as constants (static partitioning), and fill in WHERE clauses if necessary for data you know a priori.
  5. The more partition key columns you leave without constant values in the PARTITION clause (dynamic partitioning), the more flexible but also the more resource-intensive the operation is.
  6. Different partitions could be different file formats, via ALTER TABLE ... PARTITION ... SET FILEFORMAT.
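
A sketch of the "leave in place" and "move" variations for one new partition (the table name and paths are hypothetical):

-- Leave the files where they are.
ALTER TABLE logs ADD PARTITION (year=2015, month=2)
  LOCATION '/data/external/logs/2015/02';

-- Or move staged files under the table's own directory.
ALTER TABLE logs ADD PARTITION (year=2015, month=3);
LOAD DATA INPATH '/user/etl/staging/logs_2015_03'
  INTO TABLE logs PARTITION (year=2015, month=3);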

Takeaway

Lots of flexibility. The variations involve several different statements and clauses.

Examples

hdfs.log


What are the number ranges for Impala data types?

  1. You can deduce basic ranges of *INT types by knowing the types are 1-, 2-, 4-, and 8-byte signed integers.
  2. Plus and minus powers of 2, one bit saved for the sign, range a little farther on the negative side than the positive.
  3. Different behavior if out-of-range values read from data vs. overflowing during computation vs. CASTing.
  4. You can also call min_int() and max_int() functions to check programmatically.
  5. DECIMAL is all in terms of 9s depending on scale/precision.
  6. E.g. DECIMAL(5) = -99999 to 99999; DECIMAL(4,2) = -99.99 to 99.99. Max of 38 '9' digits.
  7. FLOAT and DOUBLE are IEEE-style. You should be familiar with them from scientific computing to use them effectively.
  8. Beware SUM(), AVG() on huge numbers of FLOAT or DOUBLE values, or GROUP BY.
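
A quick way to check the integer ranges programmatically:

SELECT min_tinyint(), max_tinyint(), min_smallint(), max_smallint(),
       min_int(), max_int(), min_bigint(), max_bigint();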

Takeaway

Examples

numbers.log


Isn't it great, I have this table with a million rows!

  1. A million rows isn't substantial enough to be interesting for Big Data. Unlikely to be >1 HDFS data block.
  2. A million rows is more like the "small" lookup table in a join operation. Like maybe every traffic light in the US that would be cross-referenced against the much larger data set of every traffic accident, red light ticket, cycle time, or speed measurement.
  3. Or it's like metadata about every book in a library system. You can count the most popular authors or how many hardcovers there are vs. paperbacks vs. e-books, but you really want to cross-reference against every lending event to gain new insights (which genres take shortest/longest time to read, are there patterns in overdue fines).
  4. A million rows might tempt you into experimenting with partitioning or other techniques for scalability, but again the volume is so small you won't learn anything useful.

Takeaway

Start with a billion rows, at least several GB, to learn anything useful about parallel execution on Hadoop.

Examples

What is wrong with using a million rows of data? (O'Reilly blog post)


How are joins different or special for Hadoop versus traditional databases?

  1. The computer that figures out what rows are applicable from table 1 isn't necessarily the same computer that cross-references against the rows from table 2. Data is distributed between hosts according to blocks, not necessarily entire partitions.
  2. The mechanism for coordinating how the data is read, processed, and transmitted is different depending on which side of the join is bigger. Which could be "more rows", "higher volume of data", "more rows matching the join condition and the WHERE clauses".
  3. That's why COMPUTE STATS is so important.
  4. Maybe you denormalize your data so you don't have to do so many joins.
  5. Cut down the data being joined by joining results of a subquery or WITH clause.
  6. Cross joins could be a really bad idea, that's why you have to request them.
  7. And hints + STRAIGHT_JOIN if you still feel the join order isn't optimal.
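
A sketch of the stats-then-join workflow (the table and column names are placeholders):

COMPUTE STATS accidents;
COMPUTE STATS traffic_lights;

-- Normally let the planner choose; if needed, pin the join order and distribution strategy.
SELECT STRAIGHT_JOIN a.accident_id, t.light_id
FROM accidents a JOIN [BROADCAST] traffic_lights t
  ON a.intersection_id = t.intersection_id;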

Takeaway

This is a subject that is sort of familiar from traditional SQL, but people rarely had to examine it in such depth before distributed queries came along in Hadoop.

Examples

joins.log, cross_join.log


Where is my favorite feature from RDBMS X? Transactions, triggers, constraints...

  1. Some of these are more OLTP-oriented, dealing with 1 row at a time rather than billions of new inserted rows. An RDBMS used for data warehousing would say to turn constraint checking and index maintenance off during data loading.
  2. Since Impala deals with data files prepared by others, integrity checking has to be pushed back farther in the ETL cycle.
  3. HDFS is append-only, so there is not the same easy undo as you get with rolling back a transaction.
  4. Big data analytics tends to have defined windows for data loading and then things are stable for a while. If you are in a frequent-ingest scenario, that's beyond the scope of a "getting started" tutorial.
  5. Of course ETL gets more convenient and reliable over time, as new features get added...
  6. Forget about using EMPLOYEE and DEPARTMENTS tables for anything useful -- too small and OLTP-centric.

Takeaway

Unlearn any habits or mismatched expectations from OLTP days. Get into a business intelligence mindset.

Examples

N/A


How about indexes, where did those go?

  1. An RDBMS would say not to use indexes for queries that read more than 20% of a table. Otherwise it's more work to seek back and forth in the index when you're reading all the data blocks anyway. This is a very common scenario for big data analytics queries.
  2. HDFS blocks are so big, chances are you'll have to read all of them anyway.
  3. Partitioning is sort of like a single major index used by all queries for filtering based on equality and ranges.
  4. The "full table scan" isn't such a scary thing anymore. Move towards "full partition scan" to minimize the I/O for a particular query.

Takeaway

Shift performance mindset towards BI-style full table scans and partitioning.

Examples

partition_keys.log


What things could trip you up with numbers?

  1. Arithmetic expressions might return a bigger type than you expect. For example, 2+2+2+2 is a BIGINT, and has to be cast to "fit" into INT or smaller columns. (In some prior releases, 2+2 was also a BIGINT.)
  2. Floating-point values aren't represented precisely. Implications for SUM(), comparison, GROUP BY.
  3. DECIMAL is precise but slower than FLOAT or DOUBLE.
  4. Out-of-range integers bump up against the edges of the range.
  5. No unsigned types.
  6. Difference in conversion behavior for out-of-range or wrong format values between CAST() and reading from disk.

Takeaway

Do experiments and reading to validate all your prior assumptions about numeric types.

Examples

numbers.log


What things could trip you up with dates and times?

  1. Impala is relatively picky about the input format. (But has gotten more relaxed over time.)
  2. UTC vs. local timezone aspect causes glitches going back and forth with Hive.
  3. Impala TIMESTAMP dates go back as far as 1400.
  4. EXTRACT() and TRUNC() are relatively new additions.
  5. You usually don't use a TIMESTAMP column directly as a partition key, but rather some other representation of it (possibly split apart). Run the TIMESTAMP through TRUNC() and/or EXTRACT() to get repeated values for grouping.
  6. You might regret turning TIMESTAMP into STRING to use as a partition key.
  7. You might regret doing GROUP BY on a TIMESTAMP column - too granular to be useful.
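
A sketch of taming TIMESTAMP granularity with TRUNC() and EXTRACT() (t1 and event_ts are hypothetical):

SELECT TRUNC(event_ts, 'DD') AS event_day, COUNT(*)
  FROM t1
  GROUP BY TRUNC(event_ts, 'DD');

SELECT EXTRACT(event_ts, 'year') AS yr, EXTRACT(event_ts, 'month') AS mon, COUNT(*)
  FROM t1
  GROUP BY EXTRACT(event_ts, 'year'), EXTRACT(event_ts, 'month');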

Takeaway

Examples

date_format.log, partition_keys.log


What things could trip you up with strings?

  1. By default, no length enforcement.
  2. Don't recommend constrained VARCHAR or CHAR types for performance-critical stuff.
  3. Only ASCII is reliable for built-in functions like LENGTH().
  4. Impala can read a longer string from a data file than it can handle in INSERT.
  5. Potential interactions between long strings and Parquet block size.

Takeaway

Examples

create_data_types.log, insert_data_types.log


What things could trip you up with sorting?

  1. ORDER BY always results in some extra work at the end, imposing a delay on the transmission of results back to you or your application. Be mindful when it's needed and when not.
  2. ORDER BY in a view definition doesn't carry over to a query against the view.
  3. ORDER BY in a subquery doesn't carry over to the outer query result.
  4. ORDER BY with LIMIT and/or OFFSET changes the result set and so gets preserved for correctness, but still doesn't apply at the outermost level.
  5. Results that happen to be in sorted order based on layout of data file are not guaranteed. (Especially when file > 1 block.)
  6. DISTINCT doesn't sort its results.
  7. Huge sorts could result in spilling to disk, slowing down overall query.

Takeaway

Examples

sorting.log


You enter SELECT COUNT(*) FROM t1 and get an "unknown table" error. Why could that be?

  1. Maybe you're in the wrong database. SELECT current_database(). USE. SHOW TABLES LIKE... SHOW TABLES IN... SELECT with a fully qualified table name.
  2. Maybe somebody else dropped the table. No transactions, so a table could disappear out from under you.
  3. Maybe you created the table in Hive. INVALIDATE METADATA t1.
  4. Maybe your security settings don't let you see the table. Contact your administrator.

Takeaway

DDL in Impala has its own nuances to learn.

Examples

N/A; the basics of tables and databases illustrated elsewhere, other conditions are relatively rare.


I run a query and it seems slow. How do I figure out what to do?

  1. Is the data big enough to be worth timing? Enough data blocks to parallelize?
  2. Look at EXPLAIN plan to see how much data is being read, see if that makes sense. Maybe you'll learn you need to COMPUTE STATS or do some partition pruning.
  3. Figure out how much data is being read from disk, do back-of-the-envelope calculations to figure out if the read rate or processing rate is out of whack. E.g. expect to read about 100 MB / second for each disk. Expect to process N rows per second per host for aggregation. (Understand that only 1 core per host is used here.) Remember that you're expecting to process huge volumes of info, and the laws of physics apply in terms of how many bytes per second can be read and analyzed.
  4. Look at SUMMARY. See if some host taking much longer than average for a particular phase. (Suspect skew or problem with host.) See if some phase taking the bulk of the time. See if anything jumps out at you about efficiency of that phase.
  5. Look at PROFILE. See which phase(s) taking the longest on which host(s). Drill down into physical details of bytes read, memory used, bytes transmitted.
  6. Read up on performance techniques: COMPUTE STATS, partitioning, Parquet file format.
  7. Understand single-threading implications for some operations (aggregations and joins), versus multi-threaded scans.

Takeaway

There is an orderly process to follow for figuring out performance. You'll improve with practice.

Examples

explain_plans.log


How is X, Y, or Z represented in a text-format table? (Numbers, Booleans, date/times, NULLs.)

Variations

  1. How do you see exactly what each value looks like in the data file?
  2. How do you specify or change the field delimiter?
  3. Does Impala strip enclosing quotes from CSV-style fields?
  4. Can Impala skip header lines in the data files?

Answers

  1. Do an INSERT and then take a look at the resulting text file.
  2. Verbose ROW FORMAT syntax in CREATE TABLE. ALTER TABLE ... SET SERDEPROPERTIES to change the delimiter afterward.
  3. No. Either define a view to strip them off, or copy to a table with the enclosing quotes removed. Not strictly "CSV" according to the standard spec or custom serdes.
  4. No support currently for skipping header lines.

Takeaway

Examples

text_tables.log


What could be bottlenecks slowing down a big query?

  1. Admission control. (Affects large and small queries equally.)
  2. A single node has some hardware or network problem slowing it down generally. (Less capacity, heavy load from non-Impala work, etc.)
  3. A single node is doing more work than others.
  4. Some complicated expression fools partition pruning because it isn't resolved to a constant.
  5. Some complicated function is called billions of times and can't be codegen'ed.

Takeaway

Examples

N/A


What are all the kinds of nonexistent or special values?

  1. There's NULL.
  2. The empty string is different from NULL. E.g. when you concat NULL with something else, result is NULL; not so for empty string.
  3. In metastore, -1 signifies a missing value. E.g. in SHOW COLUMN STATS output.
  4. Out-of-range integer values are sometimes converted to the largest/smallest value of the applicable type.
  5. Floats and doubles also have NaN and Inf.

Takeaway

There are some Hadoopisms in addition to the SQL NULL aspects.

Examples

special_values.log


What are the differences between my dev/test system and a production environment?

  1. You might do some functional testing on a single-node system. Production will be a cluster.
  2. Production cluster will likely be bigger. Potential different bottlenecks.
  3. Production cluster might have lots of other databases and tables. Catalog scalability issues.
  4. Production cluster likely to have security turned on. You'll need credentials, get privileges from administrator, might not have write access or even visibility into everything.
  5. Production cluster likely to have concurrent Impala users. Higher memory, CPU, disk, network utilization.
  6. Production cluster likely to have resource management and/or admission control. Queueing. Potential failure for big queries.
  7. Production cluster likely to have concurrent non-Impala workloads. Memory limit considerations.
  8. Production cluster likely to have more frequent / structured ETL. You'll access files produced by others, others will access your files. Possibly need to REFRESH periodically.

Takeaway

There's a whole other course's worth for production considerations.

Examples

N/A


Which of the following are OK? (Quoting identifiers.)

Questions

  1. SELECT abc_def from xyz_123;
  2. SELECT abc-def from xyz-123;
  3. SELECT 'abc_def' from 'xyz_123';
  4. SELECT "abc_def" from "xyz_123";
  5. SELECT `abc_def` from `xyz_123`;
  6. SELECT `abc_def` from `db_name.xyz_123`;
  7. SELECT `abc_def` from `db_name`.`xyz_123`;
  8. SELECT a#c from 123_xyz;
  9. SELECT `a#c` from `123_xyz`;

Answers

  1. OK.
  2. Fails, because of dash in table name. abc-def could be a column expression.
  3. Fails.
  4. Fails.
  5. OK.
  6. Fails because the dot separator is inside the backticks.
  7. OK.
  8. Fails.
  9. Fails. Table name can be created with backticks, column name can't be.

Takeaway

Practice with single quotes, double quotes, backticks.

Examples

quoting_identifiers.log


Which of the following are OK? (Quoting literals.)

Questions

  1. SELECT c1 FROM t1 WHERE c2 = "hello world";
  2. SELECT c1 FROM t1 WHERE c2 = 'hello world';
  3. SELECT c1 FROM t1 WHERE c2 = `hello world`;
  4. SELECT c1 FROM t1 WHERE c2 = hello;
  5. SELECT c1 FROM t1 WHERE c2 = hello world;

Answers

  1. OK.
  2. OK.
  3. Fails.
  4. Possibly OK but misleading - hello treated as a column name.
  5. Fails.

Takeaway

Examples

quoting_literals.log


Which of the following are OK? (Aliasing identifiers.)

Questions

  1. SELECT foo bar FROM baz bletch;
  2. SELECT foo "bar" FROM baz "bletch";
  3. SELECT foo `bar` FROM baz `bletch`;
  4. SELECT foo AS column 1 FROM baz AS table 1;
  5. SELECT foo AS 'column 1' FROM baz AS 'table 1';
  6. SELECT foo AS "column 1" FROM baz AS "table 1";
  7. SELECT foo AS `column 1` FROM baz AS `table 1`;
  8. SELECT foo AS `column 1` FROM `database 1`.baz AS `table 1`;
  9. SELECT * FROM (SELECT MAX(c1) AS biggest FROM t1);
  10. SELECT * FROM (SELECT MAX(c1) FROM t1) AS biggest;

Answers

  1. OK.
  2. OK.
  3. OK.
  4. Fails. Column and table are reserved words.
  5. OK.
  6. OK.
  7. OK.
  8. Fails. You can't have a real database or table name with a space.
  9. Fails. Subquery must be given an alias.
  10. OK since subquery has an alias.

Takeaway

Examples

aliasing.log


Which of the following are OK? (Reserved words.)

Questions

  1. SELECT c1 FROM t1;
  2. SELECT c1 FROM insert;
  3. SELECT c1 FROM "insert";
  4. SELECT c1 FROM 'insert';
  5. SELECT c1 FROM `insert`;
  6. SELECT c1 insert FROM t1;
  7. SELECT c1 AS insert FROM t1;
  8. SELECT c1 FROM data;
  9. SELECT c1 AS as FROM t1;
  10. SELECT c1 FROM lower;
  11. SELECT c1 FROM max;
  12. SELECT `having` AS `group` FROM `insert`.`distinct`;

Answers

  1. OK.
  2. Fails, reserved word requires backticks.
  3. Fails, reserved word requires backticks.
  4. Fails, reserved word requires backticks.
  5. OK.
  6. Fails, reserved word requires backticks.
  7. Fails, reserved word requires backticks.
  8. Fails, reserved word requires backticks.
  9. Fails, reserved word requires backticks.
  10. OK, built-in function names are not reserved words.
  11. OK, aggregate function names are not reserved words.
  12. OK, can escape reserved words in column names, aliases, database names, and table names.

Takeaway

Examples

reserved_words.log


In what circumstances are NULLs allowed in the schema definition?

  1. All columns are nullable.
  2. NULLs in text data files are represented as \N, not empty values or the string 'null'.
  3. Newly added columns are NULL for all data files that don't have them.
  4. Avro schemas need to define all columns as nullable.
  5. There is a full set of nvl(), zeroifnull(), coalesce(), etc. functions for handling NULL values and substituting or skipping as appropriate.
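
A sketch of those NULL-handling functions in action (t1 and its columns are placeholders):

SELECT ifnull(c1, 'unknown') AS c1_filled,
       zeroifnull(c2) AS c2_filled,
       coalesce(c3, c4, 0) AS first_non_null
FROM t1;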

Takeaway

The flexible nature of Impala schema definitions means that you can't practically enforce NOT NULL conditions. Design with the expectation that NULLs might show up anywhere.

Examples

nulls.log


Where can you find Impala?

  1. The starting point for all things Impala is the Impala.io web site.
  2. You can try it free on Cloudera Live with no installation.
  3. There's the QuickStart VM for single-node experimentation.
  4. It comes with any CDH 5 installation.
  5. You can download it for CDH 4, but that's getting long in the tooth. Stick with CDH 5.
  6. You can spin up instances under Amazon EC2.
  7. You get it as part of Oracle Big Data Appliance.
  8. MapR includes it in their distribution.

Takeaway

You can try out the functionality in lots of places. Some require more serious HW / SW commitment than others.

Examples

Some URLs to browse...

Cloudera Live

http://www.cloudera.com/content/cloudera/en/products-and-services/cloudera-live.html (requires signup)

Read-only demo: http://go.cloudera.com/cloudera-live.html (see link at bottom to skip this signup page)

QuickStart VM

Docs: http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cloudera_quickstart_vm.html

Download: http://www.cloudera.com/content/cloudera/en/downloads/quickstart_vms/cdh-5-3-x.html

CDH Distro

http://www.cloudera.com/content/cloudera/en/downloads/cdh/cdh-5-3-1.html

Amazon EMR

Announcement: http://aws.amazon.com/about-aws/whats-new/2013/12/12/announcing-support-for-impala-with-amazon-elastic-mapreduce/

Docs: http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-impala.html

Tutorial: http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/query-impala.html

Oracle Big Data Appliance

http://www.cloudera.com/content/cloudera/en/solutions/partner/Oracle.html

http://www.oracle.com/technetwork/database/bigdata-appliance/overview/bigdataappliance-datasheet-1883358.pdf

MapR Impala docs

http://doc.mapr.com/display/MapR/Impala


How many ways are there to get a command prompt for Impala?

  1. If you're logged onto any DataNode: impala-shell
  2. If the long prompt bugs you: impala-shell -i localhost
  3. If you're logged onto a non-Impala host: impala-shell -i <datanode>[:21000]
  4. Use Hue.
  5. Some people use Beeline?!

Takeaway

Once you know the hostname or IP, and optionally port, of any DataNode, you're ready to roll.

Examples

N/A. We'll see lots of impala-shell in other examples.


What do you need to know to get around in impala-shell?

  1. End every command with a semicolon. Even quit.
  2. Long commands, break across lines. Be ultra-careful though about mismatched quotes.
  3. Up/down arrow to cycle through past commands.
  4. Left/right arrow to cursor around and edit previous commands.
  5. Readline shortcuts like Ctrl-A, Ctrl-E = beginning/end of line. Alt-B, Alt-F = backward/forward one word. (http://www.bigsmoke.us/readline/shortcuts)
  6. Alt-click on Mac to make cursor jump, but only within same line.

Takeaway

impala-shell is pretty convenient for interactive experimenting and iterating. Just watch out for semicolons and quotes.

Examples

N/A. Try running and then modifying some CREATE/DROP TABLE statements.


What kinds of data types can I use in tables?

  1. Integers - CREATE TABLE int_t (t TINYINT, s SMALLINT, i INT, b BIGINT, dec DECIMAL(10));
  2. Floating-point - CREATE TABLE fp_t (f FLOAT, d DOUBLE, dec DECIMAL(5,3));
  3. Strings - CREATE TABLE str_t (s STRING, v VARCHAR(64), c CHAR(5));
  4. Dates/times - CREATE TABLE temp_t (ts TIMESTAMP);
  5. Booleans - CREATE TABLE bool_t (b BOOLEAN);

Examples

create_data_types.log


How do I get data into a table?

  1. INSERT...VALUES statement.
    • Watch out! INSERT...VALUES not scalable or performant on HDFS.
    • In real life, mainly use INSERT...SELECT to copy table-to-table, with expressions, functions, and CAST() as necessary.
    • Also see LOAD DATA and HDFS-level ETL techniques in other examples.
  2. Integers - literals; arithmetic expressions + CAST() if necessary; function results + CAST() if necessary.
  3. Floating-point - literals; arithmetic expressions + CAST() if necessary; function results + CAST() if necessary.
  4. Strings - literals; function results + CAST() if necessary.
  5. Dates/times - string literals in right format; function results, sometimes chained.
  6. Booleans - true / false literals; assertion expression results.
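
A minimal sketch of the realistic INSERT...SELECT path, with made-up staging and destination tables:

INSERT INTO sales
  SELECT CAST(id AS INT),
         CAST(amount AS DECIMAL(9,2)),
         upper(region),
         CAST(sold_date AS TIMESTAMP)
  FROM staging_sales;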

Examples

insert_data_types.log


How do I make a table be in a specific database?

  1. CREATE DATABASE IF NOT EXISTS db_name;
  2. USE db_name; CREATE TABLE tbl_name ...;
  3. CREATE TABLE db_name.tbl_name ...;
  4. CREATE TABLE tbl_name ... ; ALTER TABLE tbl_name RENAME TO db_name.tbl_name;
  5. impala-shell -d db_name -q 'CREATE TABLE tbl_name ...'

Examples

table_in_db.log


When do I have to cast data types?

  1. More often than you might expect. Impala will automatically up-cast, but not implicitly down-cast to avoid losing precision.
  2. 2+2+2+2 produces a BIGINT, so must be CAST() to a smaller type in e.g. INSERT.
  3. Source table might have bigger types than dest table; CAST() during INSERT ... SELECT.
  4. When inserting multiple values, the presence of a certain type forces all values for that column to be interpreted as that type.
  5. If you don't know what type an expression returns, use it in a CTAS!
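
The CTAS trick from item 5, as a quick sketch:

CREATE TABLE check_types AS
  SELECT 2+2+2+2 AS int_expr,
         2.0 + 2 AS mixed_expr,
         now() + INTERVAL 3 DAYS AS ts_expr;
DESCRIBE check_types;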

Examples

casting.log


How do I empty out an existing table?

  1. No DELETE or TRUNCATE TABLE statements...
  2. INSERT OVERWRITE <table_name> SELECT <query matching no rows>
  3. INSERT OVERWRITE <table_name> SELECT <query> -- Empties + puts new data in.
  4. 'hdfs dfs' operations to remove HDFS files + REFRESH <table_name>.
  5. ALTER TABLE <table_name> SET LOCATION '/hdfs/directory/with/no/data/files'
  6. ALTER TABLE SET FILEFORMAT <different_format_than_the_data_files> -- won't be able to see any rows

Examples

make_table_empty.log


How do I work with different date formats?

  1. SELECT now() to see format for current date and time.
  2. <timestamp> +/- INTERVAL n [DAYS|HOURS|YEARS|etc] to produce delta values.
  3. INSERT now()-themed values into a text table, examine the output file.
  4. Combination of unix_timestamp() to produce a number, then from_unixtime() to produce a formatted string.
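
A sketch of the arithmetic and reformatting functions:

SELECT now(), now() + INTERVAL 3 DAYS, now() - INTERVAL 2 HOURS;
SELECT from_unixtime(unix_timestamp('2015-02-18 12:30:00'), 'yyyy/MM/dd HH:mm');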

Examples

date_format.log, Impala docs


You have data with dates. How can you partition by year, month, and day?

  1. If you have a finer-grained date/time value, you'll have to keep that and synthesize a new column with just the date to use as the partition key. Hopefully you know the date for all data files already based on the ETL pipeline.
  2. Represent the whole thing as a single STRING column like '2015-02-18'. Bad idea - prefer numbers for partition keys, rather than strings.
  3. Represent the whole thing as a single INT column like 20150218. Not bad. You can zero in on a particular day or range and Impala will do partition pruning.
  4. Represent the whole thing as a single TIMESTAMP column like '2015-02-18'. Unusual but has potential. You would need to be careful to truncate each value to avoid a huge number of different partitions (for each different hour/second/fractional second).
  5. Split the individual fields into separate STRING columns like '2015','02','18'. Bad idea. 3x the overhead of using a string as a partition key. Lots of string operations like concatenation to match individual dates or ranges in the WHERE clause. Compatible with Hive in terms of leading zeros in month/day directory names like '05'.
  6. Split the individual fields into separate SMALLINT and TINYINT values like 2015,2,18. Good performance, minimal memory usage compared to STRING. (SMALLINT and TINYINT use less memory than INT?) Better able to use dynamic partitioning, e.g. specify year and month in INSERT...PARTITION but leave day blank. Easier to filter, e.g. conditions like 'WHERE month IN (11,12,1)'. Three levels of directories to manage in HDFS. Potential mismatch with Hive directory names which include leading zeros.
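
A sketch of option 6, splitting a finer-grained TIMESTAMP into SMALLINT/TINYINT partition keys (all names are hypothetical):

CREATE TABLE events (id BIGINT, event_ts TIMESTAMP, details STRING)
  PARTITIONED BY (year SMALLINT, month TINYINT, day TINYINT)
  STORED AS PARQUET;

INSERT INTO events PARTITION (year=2015, month=2, day)
  SELECT id, event_ts, details, CAST(extract(event_ts, 'day') AS TINYINT)
  FROM staging_events
  WHERE extract(event_ts, 'year') = 2015 AND extract(event_ts, 'month') = 2;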

Takeaway

Choice of partition key type and granularity is an important decision to think through and test performance and scalability with realistic data volumes.

Examples

partition_keys.log


How can I interface with Impala data through other Hadoop tools?

  1. DESCRIBE FORMATTED <table> always shows you the HDFS directory where table data lives.
  2. If you add or delete data in HDFS outside of Impala, run REFRESH <table_name>.
  3. Use LOAD DATA to move externally created files into an Impala table (starting from HDFS).
  4. Use CREATE EXTERNAL TABLE to point to data files on HDFS outside Impala's control.
  5. Use ALTER TABLE ... SET LOCATION to point to a different HDFS path at any time. Often useful to point a single partition to a specific path.
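
For example, a hedged sketch of dropping a new file into a table's HDFS directory and making it visible (the warehouse path shown is just the typical default):

hdfs dfs -put new_batch.csv /user/hive/warehouse/mydb.db/t1/
impala-shell -q 'REFRESH mydb.t1'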

Takeaway

It is easy to switch back and forth between Impala and other Hadoop tools, and interchange data.

Examples

hdfs.log, text_tables.log


How do I script Impala from the shell?

  1. impala-shell -d <database> to start or execute in a particular database.
  2. impala-shell -q '<query>' to run a single statement.
  3. impala-shell -f '<file>' to run a sequence of statements, or feed SQL code to standard input.
  4. impala-shell --quiet to suppress messages for query timing and echoed commands.
  5. impala-shell -B to produce delimited result sets for queries rather than pretty-printed tables.
    • Very important for avoiding impala-shell overhead during benchmarks with big result sets.
  6. Embed a HERE document to do variable substitution in the SQL code.
  7. Use ! in impala-shell to shell out to a Linux command.

Examples

A HERE document...

#!/bin/bash

export DB_NAME=tpc
export TABLE_NAME=customer_address
export CRITERIA=Oakland
export CUTOFF=20

impala-shell -B -d $DB_NAME <<EOF
select * from $DB_NAME.$TABLE_NAME where ca_city = '$CRITERIA' limit $CUTOFF;
EOF

What are all the ways I can understand the performance and scalability of a query?

  1. EXPLAIN <query> to get plan and estimates ahead of time. Lets you avoid embarrassment of accidentally running a resource hog query that bogs down the cluster.
  2. SET EXPLAIN_LEVEL=<level> to see more or less detail.
  3. SUMMARY to get a concise view of the phases, to look for unusually high timing or memory usage, or outliers.
  4. PROFILE to get a detailed view of everything that happened on every node.
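
In impala-shell, the sequence looks roughly like this (the query itself is a placeholder):

SET EXPLAIN_LEVEL=2;
EXPLAIN SELECT c1, COUNT(*) FROM t1 GROUP BY c1;
SELECT c1, COUNT(*) FROM t1 GROUP BY c1;
SUMMARY;
PROFILE;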

Takeaway

You can choose whichever level of detail you need to feel comfortable or do tuning and troubleshooting.

Examples

explain_plans.log


What analytic functions does Impala have?

  1. DENSE_RANK, FIRST_VALUE, LAG, LAST_VALUE, LEAD, RANK, ROW_NUMBER.
  2. OVER and window clauses. Limited RANGE support in window clauses (current row + unbounded in one or both directions).
  3. If you use these for scenarios like time series, stock ticker, etc. you probably already have them in your code.
  4. Use ROW_NUMBER and DENSE_RANK to synthesize extra columns as unique IDs or category IDs.
  5. These function calls are rewritten internally as join queries.
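
A sketch of LAG() with an OVER clause for a time-series style query (the table and columns are invented):

SELECT symbol, trade_ts, price,
       price - LAG(price) OVER (PARTITION BY symbol ORDER BY trade_ts) AS change_since_last
FROM trades
ORDER BY symbol, trade_ts;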

Takeaway

This is a relatively new Impala feature that opens up new use cases. Dive in and try these functions.

Examples

normalize.log, Impala docs


How do you approach a table you don't understand very well?

  1. First you find it: SHOW TABLES, SHOW TABLES in <db>, SHOW TABLES IN <db> LIKE '*wildcarded_name*'.
  2. Then you take a look at its structure: DESC <table>.
  3. Then you look a little deeper at its innards: DESC FORMATTED <table>.
  4. You probe the physical data files with SHOW TABLE STATS.
  5. If it's partitioned, you examine the partitions with SHOW PARTITIONS. (Actually, this is the same info as SHOW TABLE STATS.)
  6. You see if the table has been analyzed with SHOW COLUMN STATS.
  7. Knowing the volume of data in the table, you gently start probing its contents. (Lots of LIMIT clauses and NDV functions.)
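
Put together, the probing sequence might look like this (mystery_table and c1 are placeholders):

SHOW TABLES LIKE '*mystery*';
DESCRIBE mystery_table;
DESCRIBE FORMATTED mystery_table;
SHOW TABLE STATS mystery_table;
SHOW COLUMN STATS mystery_table;
SELECT NDV(c1), MIN(c1), MAX(c1) FROM mystery_table;
SELECT * FROM mystery_table LIMIT 5;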

How many databases are there in your Impala instance?

  1. All instances start out with 2, DEFAULT and _IMPALA_BUILTINS.
  2. See all the names with SHOW DATABASES.
  3. If it's a long list, find a particular one with SHOW DATABASES LIKE '*wildcarded_name*'.

Takeaway

Impala has a flat database structure, not divided by owner. There's usually a sweet spot between a single overstuffed database and a bazillion tiny ones.


Why could you access a column in a table today but not tomorrow?

  1. Someone could have done ALTER TABLE ... DROP COLUMN so the column no longer exists.
  2. Someone could have done ALTER TABLE ... CHANGE <old_name> <new_name> <type> and now the column has a different name.
  3. There could be a new Impala release and now <col_name> is a reserved word. (Escape it with backticks to work around.)
  4. What you think of as <table_name> could really be a view, and the view was redefined to access fewer columns. Sometimes views are used like this for security, to restrict access to sensitive columns in a table you can't query directly.

Takeaway

Hadoop generally is lenient about schema evolution. Impala follows in that same spirit. Be prepared for names and types to change.


What are the considerations for file formats?

  1. Text is the easiest and most flexible (hence the default), but also the bulkiest.
  2. Compressed text workable, but not generally a good idea due to non-splittability.
  3. Avro, RCFile, SequenceFile all queryable by Impala. More compact than text but still not ideal for data warehouse-style workloads.
  4. Parquet the preferred format for real-world scenarios:
  5. Columnar store efficient for analytic queries.
  6. Lots of encoding and compression for compactness.
  7. Widely adopted, supported by other Hadoop components.

Takeaway

Divide attention between text for ease of ingest and learning with tiny tables, and Parquet for anything realistic and scalable.

Examples

file_formats.log, Impala docs