Apache Tez - Overview

Tamil Selvan K
Oct 31, 2020

You might have used Tez extensively if you are on the HDP distribution of Hive, but if you are new to HDP/CDP or have only used Hive on MapReduce, this article gives you a quick overview of what Apache Tez is and how it uses the existing YARN architecture to speed up query execution for Hive and Pig. Tez relies on YARN to run its tasks, and it requires a durable shared filesystem accessible through the Hadoop FileSystem interface.

Traditional MapReduce execution has proven very reliable for data processing in a cluster environment, but it has a few limitations: frequent disk reads/writes between stages, job startup overhead, no caching, and so on. These are the problems Apache Tez addresses. So what is Tez?

Tez is a framework built on top of Apache Hadoop 2.0 (YARN). It uses YARN for cluster management and resource allocation. The Tez AppMaster processes the job plan by asking the YARN ResourceManager to allocate task containers, and the NodeManagers then execute the worker processes within the allocated containers. Tez is entirely a client-side application and does not require any separate service to run.

Tez APIs: Tez provides two APIs:

DAG API: defines the structure of the data flow. Complex data-flow pipelines are expressed as simple directed acyclic graphs (DAGs) of vertices and edges.
Runtime API: defines what actually executes inside each task, i.e. how the data is transformed.
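
To make the DAG and Runtime APIs concrete, here is a minimal Java sketch modeled on the classic word-count example. It is only an illustration: the processor class names (com.example.TokenProcessor, com.example.SumProcessor) are placeholders for classes you would implement against the Runtime API, and error handling is omitted.

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.tez.client.TezClient;
import org.apache.tez.dag.api.DAG;
import org.apache.tez.dag.api.Edge;
import org.apache.tez.dag.api.ProcessorDescriptor;
import org.apache.tez.dag.api.TezConfiguration;
import org.apache.tez.dag.api.Vertex;
import org.apache.tez.runtime.library.conf.OrderedPartitionedKVEdgeConfig;
import org.apache.tez.runtime.library.partitioner.HashPartitioner;

public class WordCountDagSketch {
  public static void main(String[] args) throws Exception {
    TezConfiguration tezConf = new TezConfiguration();

    // The client negotiates with YARN and launches a Tez AppMaster.
    TezClient tezClient = TezClient.create("WordCount", tezConf);
    tezClient.start();

    // Edge carrying sorted, partitioned key/value pairs between the two vertices.
    OrderedPartitionedKVEdgeConfig edgeConf = OrderedPartitionedKVEdgeConfig
        .newBuilder(Text.class.getName(), IntWritable.class.getName(),
            HashPartitioner.class.getName())
        .build();

    // Two vertices; the processor class names are placeholders for classes
    // implemented against the Tez Runtime API.
    Vertex tokenizer = Vertex.create("Tokenizer",
        ProcessorDescriptor.create("com.example.TokenProcessor"), 2);
    Vertex summer = Vertex.create("Summer",
        ProcessorDescriptor.create("com.example.SumProcessor"), 1);

    // Wire the vertices into a DAG and submit it to the AppMaster.
    DAG dag = DAG.create("WordCountDAG")
        .addVertex(tokenizer)
        .addVertex(summer)
        .addEdge(Edge.create(tokenizer, summer, edgeConf.createDefaultEdgeProperty()));

    tezClient.submitDAG(dag).waitForCompletion();
    tezClient.stop();
  }
}

The important point is that the whole pipeline is expressed as a single DAG and submitted once, instead of being chained together as a series of separate MapReduce jobs.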

Tez Components:

MR vs Tez Execution:
Tez creates a DAG plan that models the whole data pipeline as a graph, avoiding the multiple rounds of disk I/O that a chain of MapReduce jobs would incur.

Tez AppMaster and Task Container:

As far as YARN is concerned, the containers allocated to a Tez job are all equivalent and opaque. The Tez AppMaster takes full responsibility for using those containers to implement an efficient job runtime.
The Tez AppMaster handles transient container execution failures and must respond to ResourceManager requests regarding allocated (and possibly deallocated) containers. It is a single point of failure for a given job.

Tuning the AppMaster and Task Container Size:

  1. tez.am.resource.memory.mb = Tez AppMaster memory
    Set tez.am.resource.memory.mb to the same value as yarn.scheduler.minimum-allocation-mb, the YARN minimum container size. If out-of-memory (OOM) errors occur under a realistic workload with that setting, bump the value up in multiples of yarn.scheduler.minimum-allocation-mb, but do not exceed yarn.scheduler.maximum-allocation-mb.
In practice, tez.am.resource.memory.mb=4096 (4 GB; the value is in MB) is sufficient for most workloads.

2. hive.tez.container.size = Tez Task Container memory
Set hive.tez.container.size to the same value as, or a small multiple of, the YARN minimum container size yarn.scheduler.minimum-allocation-mb, but never more than yarn.scheduler.maximum-allocation-mb.

Ideally hive.tez.container.size=4096 (4 GB; the value is in MB) is a good starting point, but you can increase it if your Tez jobs fail with container OOM errors; try 8 GB or 12 GB accordingly.

3. hive.auto.convert.join.noconditionaltask.size
If hive.auto.convert.join.noconditionaltask is off, this parameter has no effect. If it is on, and the combined size of the n-1 smaller tables/partitions in an n-way join is below this threshold, the join is converted directly to a map join (with no conditional task).
In other words, hive.auto.convert.join.noconditionaltask.size lets you control how much table data can be held in memory: it is the total size of the tables that may be converted to in-memory hash maps. A combined example of these three settings is shown below.
Refer: http://www.openkb.info/2016/01/difference-between-hivemapjoinsmalltabl.html
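
As noted above, here is a hedged illustration of how these three settings could be applied for a Hive session; the values are example starting points rather than recommendations (the memory sizes are in MB, while the join-conversion threshold is in bytes, 1 GB here):

set tez.am.resource.memory.mb=4096;
set hive.tez.container.size=4096;
set hive.auto.convert.join=true;
set hive.auto.convert.join.noconditionaltask=true;
set hive.auto.convert.join.noconditionaltask.size=1073741824;

A common rule of thumb from HDP tuning guides is to keep hive.auto.convert.join.noconditionaltask.size at roughly one third of hive.tez.container.size (converted to bytes), so the map-join hash tables leave room for the rest of the task.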

Tez Libraries:
In tez-site.xml, check the property 'tez.lib.uris'. It should point to 'tez.tar.gz' in a distributed storage location (HDFS). During a job run, this archive is localized on all the NodeManager nodes:

<property>
  <name>tez.lib.uris</name>
  <value>/hdp/apps/${hdp.version}/tez/tez.tar.gz</value>
  <description>Comma-delimited list of the location of the Tez libraries which will be localized for DAGs. Specifying a single .tar.gz or .tgz assumes that a compressed version of the tez libs is being used. This is uncompressed into a tezlibs directory when running containers, and tezlibs/;tezlibs/lib/ are added to the classpath (after . and .*). If multiple files are specified - files are localized as regular files, contents of directories are localized as regular files (non-recursive).</description>
</property>
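
To verify that the tarball is actually present at that HDFS location, a quick check (substituting your actual HDP version for the placeholder) is:

hdfs dfs -ls /hdp/apps/<hdp.version>/tez/tez.tar.gz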

Tez Sessions:
When a Hive query is run from a client or JDBC tool, it starts a Tez session, which launches and holds on to one or more containers that can be reused to run multiple queries. This gives interactive queries an additional performance boost, because we don't have to wait for YARN to allocate containers every time a new query is run within the same session.
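
Outside of Hive, the same idea is visible in the Tez client API: a TezClient created in session mode can submit several DAGs to the same AppMaster, which is what makes container reuse across queries possible. A rough sketch (session mode is enabled by the third argument; firstDag and secondDag would be built as in the earlier DAG example):

TezConfiguration tezConf = new TezConfiguration();
TezClient sessionClient = TezClient.create("SharedSession", tezConf, true); // true = session mode
sessionClient.start();
sessionClient.submitDAG(firstDag).waitForCompletion();  // first query's DAG
sessionClient.submitDAG(secondDag).waitForCompletion(); // reuses the same AppMaster (and containers, if reuse is enabled)
sessionClient.stop();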

Enable Tez Container Reuse: this parameter determines whether Tez reuses the same containers to run multiple queries. Enabling it improves performance by avoiding the overhead of negotiating and launching container resources for every query.

tez.am.container.reuse.enabled=true
tez.am.container.idle.release-timeout-min.millis
(The minimum amount of time to hold on to an idle container; only applies when container reuse is enabled.)
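
For example, in tez-site.xml these could look like the following (a sketch; the 10-second idle timeout is only an illustrative value, not a recommendation):

<property>
  <name>tez.am.container.reuse.enabled</name>
  <value>true</value>
</property>
<property>
  <name>tez.am.container.idle.release-timeout-min.millis</name>
  <value>10000</value>
</property>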

Summary:
Faster execution: the complete DAG plan is generated before the job is executed.
A logical DAG is created for complex data pipelines.
Tez container reuse reduces the overhead of launching JVMs and negotiating with the ResourceManager, and reduces I/O.
Multiple jobs can run within a single AppMaster, and data can be shared between DAGs.

Additional Readings:

https://www.cloudera.com/products/open-source/apache-hadoop/apache-tez.html
https://www.slideshare.net/Hadoop_Summit/apache-tez-present-and-future
https://tez.apache.org/
