Hao Zhu

Resource Allocation Configuration for Spark on YARN

August 19, 2020

Original Post Information:

"authorDisplayName": "Hao Zhu",
"publish": "2015-09-11T07:00:00.000Z",
"tags": "spark"

In this blog post, I will explain the resource allocation configurations for Spark on YARN, describe the yarn-client and yarn-cluster modes, and include examples.

Spark can request two resources in YARN: CPU and memory. Note that Spark configurations for resource allocation are set in spark-defaults.conf, with names of the form spark.xx.xx. Some of them have a corresponding flag for client tools such as spark-submit/spark-shell/pyspark, with names of the form --xx-xx. Where a configuration has a corresponding client-tool flag, this post shows the flag after the configuration in parentheses "()". For example:

spark.driver.cores 
(--driver-cores)
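
To illustrate, the same setting can be applied either way (a sketch; the class and jar are the SparkPi example used later in this post). In spark-defaults.conf:

spark.driver.cores 2

Or as a spark-submit flag:

/opt/mapr/spark/spark-1.3.1/bin/spark-submit --driver-cores 2 --class org.apache.spark.examples.SparkPi /opt/mapr/spark/spark-1.3.1/lib/spark-examples*.jar 1000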

1. yarn-client vs. yarn-cluster mode

There are two deploy modes that can be used to launch Spark applications on YARN, per the Spark documentation (example launch commands follow the list):

  • In yarn-client mode, the driver runs in the client process and the application master is only used for requesting resources from YARN.
  • In yarn-cluster mode, the Spark driver runs inside an application master process that is managed by YARN on the cluster, and the client can go away after initiating the application.
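
For reference, the SparkPi example used throughout this post can be launched in either mode like this (a sketch; the MASTER environment variable mirrors the submission commands shown later in this post):

MASTER=yarn-client /opt/mapr/spark/spark-1.3.1/bin/spark-submit --class org.apache.spark.examples.SparkPi \
/opt/mapr/spark/spark-1.3.1/lib/spark-examples*.jar 1000

MASTER=yarn-cluster /opt/mapr/spark/spark-1.3.1/bin/spark-submit --class org.apache.spark.examples.SparkPi \
/opt/mapr/spark/spark-1.3.1/lib/spark-examples*.jar 1000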

2. Application Master (AM)

a. yarn-client

Let’s look at the settings below as an example:

[root@h1 conf]# cat spark-defaults.conf | grep am
spark.yarn.am.cores   4
spark.yarn.am.memory  777m

By default, spark.yarn.am.memoryOverhead is AM memory * 0.07, with a minimum of 384 MB. This means that if we set spark.yarn.am.memory to 777M, the actual AM container size will be 2G: 777 + Max(384, 777 * 0.07) = 777 + 384 = 1161, and with the default yarn.scheduler.minimum-allocation-mb=1024 that request is rounded up to 2048, so a 2G container is allocated to the AM. As a result, a (2G, 4 Cores) AM container with Java heap size -Xmx777M is allocated:

Assigned container container_1432752481069_0129_01_000001 of capacity

<memory:2048, vCores:4, disks:0.0>
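
In short, the container size is (spark.yarn.am.memory + max(384, spark.yarn.am.memory * 0.07)) rounded up to a multiple of yarn.scheduler.minimum-allocation-mb. If the 7% default does not suit your workload, the overhead can be set explicitly in spark-defaults.conf (the value below is illustrative):

spark.yarn.am.memoryOverhead 512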

b. yarn-cluster

In yarn-cluster mode, the Spark driver is inside the YARN AM. The driver-related configurations listed below also control the resource allocation for AM.

Take a look at the settings below as an example:

MASTER=yarn-cluster /opt/mapr/spark/spark-1.3.1/bin/spark-submit --class org.apache.spark.examples.SparkPi  \
--driver-memory 1665m \
--driver-cores 2 \
/opt/mapr/spark/spark-1.3.1/lib/spark-examples*.jar 1000

Since 1665 + Max(384, 1665 * 0.07) = 1665 + 384 = 2049 > 2048 (2G), a 3G container will be allocated to the AM. As a result, a (3G, 2 Cores) AM container with Java heap size -Xmx1665M is allocated:

Assigned container container_1432752481069_0135_02_000001 of capacity

<memory:3072, vCores:2, disks:0.0>
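
The same driver resources can also be set in spark-defaults.conf rather than on the command line, using the flag-to-configuration mapping from earlier:

spark.driver.memory 1665m
(--driver-memory)
spark.driver.cores 2
(--driver-cores)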

3. Containers for Spark executors

For Spark executor resources, yarn-client and yarn-cluster modes use the same configurations:

In spark-defaults.conf, spark.executor.memory is set to 2g.
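
Following the convention from earlier, that is:

spark.executor.memory 2g
(--executor-memory)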

Since 2048 + Max(384, 2048 * 0.07) = 2048 + 384 = 2432 > 2048, a 3G container will be allocated to each executor; and because spark.executor.instances (--num-executors) defaults to 2, Spark will start 2 (3G, 1 core) executor containers with Java heap size -Xmx2048M:

Assigned container container_1432752481069_0140_01_000002 of capacity

<memory:3072, vCores:1, disks:0.0>

Assigned container container_1432752481069_0140_01_000003 of capacity

<memory:3072, vCores:1, disks:0.0>

However, one core per executor means that only one task can run at a time in each executor. In the case of a broadcast join, the memory used for the broadcast data can be shared by multiple running tasks in the same executor if we increase the number of cores per executor (see the sketch below).
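
For example, a submission that gives each executor 4 cores, so that up to 4 tasks can run concurrently and share the executor's memory (the resource values here are illustrative):

MASTER=yarn-client /opt/mapr/spark/spark-1.3.1/bin/spark-submit --class org.apache.spark.examples.SparkPi \
--executor-memory 2g \
--executor-cores 4 \
/opt/mapr/spark/spark-1.3.1/lib/spark-examples*.jar 1000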

Note that if dynamic resource allocation is enabled by setting spark.dynamicAllocation.enabled to true, Spark can scale the number of executors registered with this application up and down based on the workload. In this case, you do not need to specify spark.executor.instances manually.
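
A minimal spark-defaults.conf sketch for dynamic allocation (the executor bounds are illustrative; dynamic allocation on YARN also requires the external shuffle service):

spark.dynamicAllocation.enabled true
spark.shuffle.service.enabled true
spark.dynamicAllocation.minExecutors 1
spark.dynamicAllocation.maxExecutors 10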

Key takeaways:

  • Spark driver resource related configurations also control the YARN application master resource in yarn-cluster mode.
  • Be aware of the max(7% of requested memory, 384 MB) off-heap memory overhead when calculating memory for the AM and executors.
  • The number of CPU cores per executor controls the number of concurrent tasks per executor.

In this blog post, you’ve learned about resource allocation configurations for Spark on YARN. If you have any further questions, please reach out to us via Slack.

Make sure you check the HPE DEV blog regularly to view more articles on this subject.

