org.apache.spark.Properties 1.0.2
- Module URI:
- package://pkg.pkl-lang.org/pkl-pantry/org.apache.spark@1.0.2#/Properties.pkl
- Source code:
- Properties.pkl
Properties
-
Defaults to all module properties rendered as either Pcf or the format specified on the command line.
-
Limit of total size of serialized results of all partitions for each Spark action (e.g. collect), in bytes. Should be at least 1M, or 0 for unlimited. Jobs will be aborted if the total size is above this limit. Having a high limit may cause out-of-memory errors in the driver (depends on spark.driver.memory and memory overhead of objects in the JVM). Setting a proper limit can protect the driver from out-of-memory errors.
Default: 1.gib
-
Amount of memory to use for the driver process, i.e. where SparkContext is initialized, in the same format as JVM memory strings with a size unit suffix ("k", "m", "g" or "t") (e.g. 512m, 2g). Note: In client mode, this config must not be set through the SparkConf directly in your application, because the driver JVM has already started at that point. Instead, please set this through the --driver-memory command line option or in your default properties file.
Default: 1.gib
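For illustration, a minimal Pkl sketch of amending this module and setting the driver memory as a data size literal; the backtick-quoted property name is assumed to mirror the Spark key and is not confirmed by this page:
  // Hypothetical sketch; verify the actual property name in Properties.pkl.
  amends "package://pkg.pkl-lang.org/pkl-pantry/org.apache.spark@1.0.2#/Properties.pkl"

  `spark.driver.memory` = 2.gib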
-
Amount of non-heap memory to be allocated per driver process in cluster mode, in MiB unless otherwise specified.
This is memory that accounts for things like VM overheads, interned strings, other native overheads, etc. This tends to grow with the container size (typically 6-10%). This option is currently supported on YARN, Mesos and Kubernetes. Note: Non-heap memory includes off-heap memory (when spark.memory.offHeap.enabled=true), memory used by other driver processes (e.g. the Python process that goes with a PySpark driver) and memory used by other non-driver processes running in the same container. The maximum memory size of the container running the driver is determined by the sum of spark.driver.memoryOverhead and spark.driver.memory.
Default: driverMemory * spark.driver.memoryOverheadFactor, with a minimum of 384
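As a worked sketch of the default formula above (the names below are local to the example, not properties of this module):
  driverMemoryMib = 4096              // corresponds to spark.driver.memory = 4g
  overheadFactor = 0.1                // default spark.driver.memoryOverheadFactor
  overheadMib = if (driverMemoryMib * overheadFactor > 384) driverMemoryMib * overheadFactor else 384  // ~409.6 MiB here; 384 MiB is the floor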
-
Fraction of driver memory to be allocated as additional non-heap memory per driver process in cluster mode.
This is memory that accounts for things like VM overheads, interned strings, other native overheads, etc. This tends to grow with the container size. This value defaults to 0.10, except for Kubernetes non-JVM jobs, which default to 0.40. This is done because non-JVM tasks need more non-JVM heap space and such tasks commonly fail with "Memory Overhead Exceeded" errors; the higher default preempts this error. This value is ignored if spark.driver.memoryOverhead is set directly.
Default: 0.1
-
Amount of a particular resource type to use on the driver.
If this is used, you must also specify the spark.driver.resource.{resourceName}.discoveryScript for the driver to find the resource on startup.
Default: 0
-
A script for the driver to run to discover a particular resource type.
This should write to STDOUT a JSON string in the format of the ResourceInformation class. This has a name and an array of addresses. For a client-submitted driver, the discovery script must assign different resource addresses to this driver compared to other drivers on the same host.
Default: null
-
Vendor of the resources to use for the driver.
This option is currently only supported on Kubernetes and is actually both the vendor and domain following the Kubernetes device plugin naming convention. (e.g. for GPUs on Kubernetes this config would be set to nvidia.com or amd.com)
Default: null
-
Comma-separated list of class names implementing org.apache.spark.api.resource.ResourceDiscoveryPlugin to load into the application.
This is for advanced users to replace the resource discovery class with a custom implementation. Spark will try each class specified until one of them returns the resource information for that resource. It tries the discovery script last if none of the plugins return information for that resource.
Default: "org.apache.spark.resource.ResourceDiscoveryScriptPlugin"
-
The amount of memory to be allocated to PySpark in each executor, in MiB unless otherwise specified.
If set, PySpark memory for an executor will be limited to this amount. If not set, Spark will not limit Python's memory use and it is up to the application to avoid exceeding the overhead memory space shared with other non-JVM processes. When PySpark is run in YARN or Kubernetes, this memory is added to executor resource requests. Note: This feature is dependent on Python's `resource` module; therefore, the behaviors and limitations are inherited. For instance, Windows does not support resource limiting and actual resource is not limited on MacOS.
Default: null
-
Amount of additional memory to be allocated per executor process, in MiB unless otherwise specified.
This is memory that accounts for things like VM overheads, interned strings, other native overheads, etc. This tends to grow with the executor size (typically 6-10%). This option is currently supported on YARN and Kubernetes. Note: Additional memory includes PySpark executor memory (when spark.executor.pyspark.memory is not configured) and memory used by other non-executor processes running in the same container. The maximum memory size of the container running the executor is determined by the sum of spark.executor.memoryOverhead, spark.executor.memory, spark.memory.offHeap.size and spark.executor.pyspark.memory.
Default: executorMemory * spark.executor.memoryOverheadFactor, with a minimum of 384
-
Fraction of executor memory to be allocated as additional non-heap memory per executor process.
This is memory that accounts for things like VM overheads, interned strings, other native overheads, etc. This tends to grow with the container size. This value defaults to 0.10, except for Kubernetes non-JVM jobs, which default to 0.40. This is done because non-JVM tasks need more non-JVM heap space and such tasks commonly fail with "Memory Overhead Exceeded" errors; the higher default preempts this error. This value is ignored if spark.executor.memoryOverhead is set directly.
Default: 0.1
-
Amount of a particular resource type to use per executor process.
If this is used, you must also specify the spark.executor.resource.{resourceName}.discoveryScript for the executor to find the resource on startup.
Default: 0
-
A script for the executor to run to discover a particular resource type.
This should write to STDOUT a JSON string in the format of the ResourceInformation class. This has a name and an array of addresses.
Default: null
-
Vendor of the resources to use for the executors.
This option is currently only supported on Kubernetes and is actually both the vendor and domain following the Kubernetes device plugin naming convention. (e.g. for GPUs on Kubernetes this config would be set to nvidia.com or amd.com)
Default: null
-
A comma-separated list of classes that implement SparkListener; when initializing SparkContext, instances of these classes will be created and registered with Spark's listener bus. If a class has a single-argument constructor that accepts a SparkConf, that constructor will be called; otherwise, a zero-argument constructor will be called. If no valid constructor can be found, the SparkContext creation will fail with an exception.
Default: null
-
Directory to use for "scratch" space in Spark, including map output files and RDDs that get stored on disk.
This should be on a fast, local disk in your system. It can also be a comma-separated list of multiple directories on different disks.
Note: This will be overridden by the SPARK_LOCAL_DIRS (Standalone), MESOS_SANDBOX (Mesos) or LOCAL_DIRS (YARN) environment variables set by the cluster manager.
Default: "/tmp"
-
The cluster manager to connect to.
See the list of allowed master URLs.
Default: null
-
Application information that will be written into the Yarn RM log/HDFS audit log when running on Yarn/HDFS.
Its length depends on the Hadoop configuration hadoop.caller.context.max.size. It should be concise, and typically can have up to 50 characters.
Default: null
-
Base directory in which Spark driver logs are synced, if spark.driver.log.persistToDfs.enabled is true.
Within this base directory, each application logs the driver logs to an application-specific file. Users may want to set this to a unified location like an HDFS directory so driver log files can be persisted for later usage. This directory should allow any Spark user to read/write files and the Spark History Server user to delete files. Additionally, older logs from this directory are cleaned by the Spark History Server if spark.history.fs.driverlog.cleaner.enabled is true and they are older than the max age configured by spark.history.fs.driverlog.cleaner.maxAge.
Default: null
-
If true, a Spark application running in client mode will write driver logs to persistent storage, configured in spark.driver.log.dfsDir.
If spark.driver.log.dfsDir is not configured, driver logs will not be persisted. Additionally, enable the cleaner by setting spark.history.fs.driverlog.cleaner.enabled to true in the Spark History Server.
Default: false
-
The layout for the driver logs that are synced to spark.driver.log.dfsDir.
If this is not configured, it uses the layout for the first appender defined in log4j2.properties. If that is also not configured, driver logs use the default layout.
Default: %d{yy/MM/dd HH:mm:ss.SSS} %t %p %c{1}: %m%n%ex
-
Whether to allow driver logs to use erasure coding.
On HDFS, erasure coded files will not update as quickly as regular replicated files, so they may take longer to reflect changes written by the application. Note that even if this is true, Spark will still not force the file to use erasure coding; it will simply use file system defaults.
Default: false
-
Extra classpath entries to prepend to the classpath of the driver.
Note: In client mode, this config must not be set through the SparkConf directly in your application, because the driver JVM has already started at that point. Instead, please set this through the --driver-class-path command line option or in your default properties file.
Default: null
-
A string of default JVM options to prepend to spark.driver.extraJavaOptions.
This is intended to be set by administrators.
For instance, GC settings or other logging. Note that it is illegal to set maximum heap size (-Xmx) settings with this option. Maximum heap size settings can be set with spark.driver.memory in cluster mode and through the --driver-memory command line option in client mode.
Note: In client mode, this config must not be set through the SparkConf directly in your application, because the driver JVM has already started at that point. Instead, please set this through the --driver-java-options command line option or in your default properties file.
Default: null
-
A string of extra JVM options to pass to the driver.
This is intended to be set by users.
For instance, GC settings or other logging. Note that it is illegal to set maximum heap size (-Xmx) settings with this option. Maximum heap size settings can be set with spark.driver.memory in cluster mode and through the --driver-memory command line option in client mode.
Note: In client mode, this config must not be set through the SparkConf directly in your application, because the driver JVM has already started at that point. Instead, please set this through the --driver-java-options command line option or in your default properties file. spark.driver.defaultJavaOptions will be prepended to this configuration.
Default: null
-
Set a special library path to use when launching the driver JVM.
Note: In client mode, this config must not be set through the SparkConf directly in your application, because the driver JVM has already started at that point. Instead, please set this through the --driver-library-path command line option or in your default properties file.
Default: null
-
(Experimental) Whether to give user-added jars precedence over Spark's own jars when loading classes in the driver.
This feature can be used to mitigate conflicts between Spark's dependencies and user dependencies. It is currently an experimental feature. This is used in cluster mode only.
Default: false
-
A string of default JVM options to prepend to spark.executor.extraJavaOptions.
This is intended to be set by administrators.
For instance, GC settings or other logging. Note that it is illegal to set Spark properties or maximum heap size (-Xmx) settings with this option. Spark properties should be set using a SparkConf object or the spark-defaults.conf file used with the spark-submit script. Maximum heap size settings can be set with spark.executor.memory.
The following symbols, if present, will be interpolated: {{APP_ID}} will be replaced by the application ID and {{EXECUTOR_ID}} will be replaced by the executor ID. For example, to enable verbose gc logging to a file named for the executor ID of the app in /tmp, pass a 'value' of: -verbose:gc -Xloggc:/tmp/{{APP_ID}}-{{EXECUTOR_ID}}.gc
Default: null
-
A string of extra JVM options to pass to executors.
This is intended to be set by users.
For instance, GC settings or other logging. Note that it is illegal to set Spark properties or maximum heap size (-Xmx) settings with this option. Spark properties should be set using a SparkConf object or the spark-defaults.conf file used with the spark-submit script. Maximum heap size settings can be set with spark.executor.memory.
The following symbols, if present, will be interpolated: {{APP_ID}} will be replaced by the application ID and {{EXECUTOR_ID}} will be replaced by the executor ID. For example, to enable verbose gc logging to a file named for the executor ID of the app in /tmp, pass a 'value' of: -verbose:gc -Xloggc:/tmp/{{APP_ID}}-{{EXECUTOR_ID}}.gc
spark.executor.defaultJavaOptions will be prepended to this configuration.
Default: null
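For example, a hedged Pkl sketch of passing the GC-logging options shown above; the backtick-quoted key is an assumed property name, and the double-brace placeholders are substituted by Spark, not by Pkl:
  // Hypothetical sketch; {{APP_ID}} and {{EXECUTOR_ID}} are left for Spark to interpolate.
  `spark.executor.extraJavaOptions` = "-verbose:gc -Xloggc:/tmp/{{APP_ID}}-{{EXECUTOR_ID}}.gc"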
-
Set the strategy of rolling of executor logs.
By default it is disabled. It can be set to "time" (time-based rolling) or "size" (size-based rolling). For "time", use spark.executor.logs.rolling.time.interval to set the rolling interval. For "size", use spark.executor.logs.rolling.maxSize to set the maximum file size for rolling.
Default: null
-
Set the time interval by which the executor logs will be rolled over.
Rolling is disabled by default. Valid values are daily, hourly, minutely or any interval in seconds. See spark.executor.logs.rolling.maxRetainedFiles for automatic cleaning of old logs.
Default: "daily"
-
Regex to decide which Spark configuration properties and environment variables in driver and executor environments contain sensitive information.
When this regex matches a property key or value, the value is redacted from the environment UI and various logs like YARN and event logs.
Default: "(?i)secret|password|token"
-
Enable profiling in the Python worker; the profile result will show up by sc.show_profiles(), or it will be displayed before the driver exits.
It can also be dumped to disk by sc.dump_profiles(path). If some of the profile results have been displayed manually, they will not be displayed automatically before the driver exits.
By default the pyspark.profiler.BasicProfiler will be used, but this can be overridden by passing a profiler class in as a parameter to the SparkContext constructor.
Default: false
-
The directory which is used to dump the profile result before the driver exits.
The results will be dumped as a separate file for each RDD. They can be loaded by pstats.Stats(). If this is specified, the profile result will not be displayed automatically.
Default: null
-
Amount of memory to use per Python worker process during aggregation, in the same format as JVM memory strings with a size unit suffix ("k", "m", "g" or "t") (e.g. 512m, 2g). If the memory used during aggregation goes above this amount, it will spill the data to disk.
Default: 512.mib
-
Whether to reuse Python workers.
If yes, it will use a fixed number of Python workers and does not need to fork() a Python process for every task. This is very useful if there is a large broadcast, as the broadcast will not need to be transferred from the JVM to a Python worker for every task.
Default: true
-
Comma-separated list of Maven coordinates of jars to include on the driver and executor classpaths.
The coordinates should be groupId:artifactId:version. If spark.jars.ivySettings is given, artifacts will be resolved according to the configuration in the file; otherwise artifacts will be searched for in the local maven repo, then maven central and finally any additional remote repositories given by the command-line option --repositories. For more details, see Advanced Dependency Management.
Default: null
-
Path to an Ivy settings file to customize resolution of jars specified using spark.jars.packages instead of the built-in defaults, such as maven central.
Additional repositories given by the command-line option --repositories or spark.jars.repositories will also be included. Useful for allowing Spark to resolve artifacts from behind a firewall, e.g. via an in-house artifact server like Artifactory. Details on the settings file format can be found at Settings Files. Only paths with a file:// scheme are supported. Paths without a scheme are assumed to have a file:// scheme.
When running in YARN cluster mode, this file will also be localized to the remote driver for dependency resolution within SparkContext#addJar.
Default: null
-
Comma-separated list of archives to be extracted into the working directory of each executor.
.jar, .tar.gz, .tgz and .zip are supported. You can specify the directory name to unpack into by adding # after the file name, for example, file.zip#directory. This configuration is experimental.
Default: null
-
Maximum size of map outputs to fetch simultaneously from each reduce task, in MiB unless otherwise specified.
Since each output requires us to create a buffer to receive it, this represents a fixed memory overhead per reduce task, so keep it small unless you have a large amount of memory.
Default: 48.mib
-
This configuration limits the number of remote requests to fetch blocks at any given point.
When the number of hosts in the cluster increases, it might lead to a very large number of inbound connections to one or more nodes, causing the workers to fail under load. By allowing it to limit the number of fetch requests, this scenario can be mitigated.
Default: 2147483647
-
This configuration limits the number of remote blocks being fetched per reduce task from a given host port.
When a large number of blocks are being requested from a given address in a single fetch or simultaneously, this could crash the serving executor or Node Manager. This is especially useful to reduce the load on the Node Manager when external shuffle is enabled. You can mitigate this issue by setting it to a lower value.
Default: 2147483647
-
(Netty only) Connections between hosts are reused in order to reduce connection buildup for large clusters.
For clusters with many hard disks and few hosts, this may result in insufficient concurrency to saturate all disks, and so users may consider increasing this value.
Default: 1
-
(Netty only) Off-heap buffers are used to reduce garbage collection during shuffle and cache block transfer.
For environments where off-heap memory is tightly limited, users may wish to turn this off to force all allocations from Netty to be on-heap.
Default: true
-
Length of the accept queue for the shuffle service.
For large applications, this value may need to be increased, so that incoming connections are not dropped if the service cannot keep up with a large number of connections arriving in a short period of time. This needs to be configured wherever the shuffle service itself is running, which may be outside of the application (see the spark.shuffle.service.enabled option below). If set below 1, it will fall back to the OS default defined by Netty's io.netty.util.NetUtil#SOMAXCONN.
Default: -1
-
Timeout for established connections between shuffle servers and clients to be marked as idle and closed if there are still outstanding fetch requests but no traffic on the channel for at least `connectionTimeout`.
Default: value of spark.network.timeout
-
Enables the external shuffle service.
This service preserves the shuffle files written by executors, e.g. so that executors can be safely removed, or so that shuffle fetches can continue in the event of executor failure. The external shuffle service must be set up in order to enable it. See the dynamic allocation configuration and setup documentation for more information.
Default: false
-
Whether to use the ExternalShuffleService for deleting shuffle blocks for deallocated executors when the shuffle is no longer needed.
Without this enabled, shuffle data on executors that are deallocated will remain on disk until the application ends.
Default: false
-
The max number of chunks allowed to be transferred at the same time on the shuffle service.
Note that new incoming connections will be closed when the max number is hit. The client will retry according to the shuffle retry configs (see spark.shuffle.io.maxRetries and spark.shuffle.io.retryWait); if those limits are reached, the task will fail with a fetch failure.
Default: "Long.MAX_VALUE"
-
Timeout for established connections for fetching files in Spark RPC environments to be marked as idle and closed if there are still outstanding files being downloaded but no traffic on the channel for at least `connectionTimeout`.
Default: value of spark.network.timeout
-
Whether to calculate the checksum of shuffle data.
If enabled, Spark will calculate the checksum values for each partition's data within the map output file and store the values in a checksum file on the disk. When shuffle data corruption is detected, Spark will try to diagnose the cause (e.g., network issue, disk issue, etc.) of the corruption by using the checksum file.
Default: true
-
Whether to use the ExternalShuffleService for fetching disk-persisted RDD blocks.
In case of dynamic allocation, if this feature is enabled, executors having only disk-persisted blocks are considered idle after spark.dynamicAllocation.executorIdleTimeout and will be released accordingly.
Default: false
-
The codec to compress logged events.
By default, Spark provides four codecs: lz4, lzf, snappy, and zstd. You can also use fully qualified class names to specify the codec, e.g. org.apache.spark.io.LZ4CompressionCodec, org.apache.spark.io.LZFCompressionCodec, org.apache.spark.io.SnappyCompressionCodec, and org.apache.spark.io.ZStdCompressionCodec.
Default: "zstd"
-
Whether to allow event logs to use erasure coding, or turn erasure coding off, regardless of filesystem defaults.
On HDFS, erasure coded files will not update as quickly as regular replicated files, so the application updates will take longer to appear in the History Server. Note that even if this is true, Spark will still not force the file to use erasure coding; it will simply use filesystem defaults.
Default: false
-
Base directory in which Spark events are logged, if spark.eventLog.enabled is true.
Within this base directory, Spark creates a sub-directory for each application, and logs the events specific to the application in this directory. Users may want to set this to a unified location like an HDFS directory so history files can be read by the history server.
Default: "file:///tmp/spark-events"
-
How often to update live entities.
-1 means "never update" when replaying applications, meaning only the last write will happen. For live applications, this avoids a few operations that we can live without when rapidly processing incoming task events.
Default: 100.ms
-
Enable running the Spark Master as a reverse proxy for worker and application UIs.
In this mode, the Spark master will reverse proxy the worker and application UIs to enable access without requiring direct access to their hosts. Use it with caution, as worker and application UIs will not be accessible directly; you will only be able to access them through the Spark master/proxy public URL. This setting affects all the workers and application UIs running in the cluster and must be set on all the workers, drivers and masters.
Default: false
-
If the Spark UI should be served through another front-end reverse proxy, this is the URL for accessing the Spark master UI through that reverse proxy.
This is useful when running a proxy for authentication, e.g. an OAuth proxy. The URL may contain a path prefix, like http://mydomain.com/path/to/spark/, allowing you to serve the UI for multiple Spark clusters and other web applications through the same virtual host and port. Normally, this should be an absolute URL including scheme (http/https), host and port. It is possible to specify a relative URL starting with "/" here. In this case, all URLs generated by the Spark UI and Spark REST APIs will be server-relative links -- this will still work, as the entire Spark UI is served through the same host and port. The setting affects link generation in the Spark UI, but the front-end reverse proxy is responsible for
- stripping a path prefix before forwarding the request,
- rewriting redirects which point directly to the Spark master,
- redirecting access from http://mydomain.com/path/to/spark to http://mydomain.com/path/to/spark/ (trailing slash after path prefix); otherwise relative links on the master page do not work correctly.
This setting affects all the workers and application UIs running in the cluster and must be set identically on all the workers, drivers and masters. It is only effective when spark.ui.reverseProxy is turned on. This setting is not needed when the Spark master web UI is directly reachable.
Default: null
-
Where to address redirects when Spark is running behind a proxy.
This will make Spark modify redirect responses so they point to the proxy server, instead of the Spark UI's own address. This should be only the address of the server, without any prefix paths for the application; the prefix should be set either by the proxy server itself (by adding the X-Forwarded-Context request header), or by setting the proxy base in the Spark app's configuration.
Default: null
-
Show the progress bar in the console.
The progress bar shows the progress of stages that run for longer than 500ms. If multiple stages run at the same time, multiple progress bars will be displayed on the same line. Note: In shell environments, the default value of spark.ui.showConsoleProgress is true.
Default: false
-
Specifies a custom Spark executor log URL for supporting external log services instead of using the cluster managers' application log URLs in the Spark UI.
Spark will support some path variables via patterns, which can vary by cluster manager. Please check the documentation for your cluster manager to see which patterns are supported, if any.
Please note that this configuration also replaces original log URLs in the event log, which will also be effective when accessing the application on the history server. The new log URLs must be permanent; otherwise you might have dead links for executor log URLs.
For now, only YARN mode supports this configuration.
Default: null
-
Comma-separated list of filter class names to apply to the Spark Web UI.
The filter should be a standard javax servlet Filter.
Filter parameters can also be specified in the configuration, by setting config entries of the form spark.<class name of filter>.param.<param name>=<value>
For example:
spark.ui.filters=com.test.filter1
spark.com.test.filter1.param.name1=foo
spark.com.test.filter1.param.name2=bar
Default: null
-
The codec used to compress internal data such as RDD partitions, event log, broadcast variables and shuffle outputs.
By default, Spark provides four codecs: lz4, lzf, snappy, and zstd. You can also use fully qualified class names to specify the codec, e.g. org.apache.spark.io.LZ4CompressionCodec, org.apache.spark.io.LZFCompressionCodec, org.apache.spark.io.SnappyCompressionCodec, and org.apache.spark.io.ZStdCompressionCodec.
Default: "lz4"
-
Buffer size in bytes used in Zstd compression, in the case when the Zstd compression codec is used.
Lowering this size will lower the shuffle memory usage when Zstd is used, but it might increase the compression cost because of excessive JNI call overhead.
Default: 32.kib
-
If you use Kryo serialization, give a comma-separated list of custom class names to register with Kryo.
See the tuning guide for more details.
Default: null
-
Whether to track references to the same object when serializing data with Kryo, which is necessary if your object graphs have loops and useful for efficiency if they contain multiple copies of the same object.
Can be disabled to improve performance if you know this is not the case.
Default: true
-
Whether to require registration with Kryo.
If set to 'true', Kryo will throw an exception if an unregistered class is serialized. If set to false (the default), Kryo will write unregistered class names along with each object. Writing class names can cause significant performance overhead, so enabling this option can enforce strictly that a user has not omitted classes from registration.
Default: false
-
If you use Kryo serialization, give a comma-separated list of classes that register your custom classes with Kryo.
This property is useful if you need to register your classes in a custom way, e.g. to specify a custom field serializer. Otherwise spark.kryo.classesToRegister is simpler. It should be set to classes that extend KryoRegistrator. See the tuning guide for more details.
Default: null
-
Maximum allowable size of the Kryo serialization buffer, in MiB unless otherwise specified.
This must be larger than any object you attempt to serialize and must be less than 2048m. Increase this if you get a "buffer limit exceeded" exception inside Kryo.
Default: 64.mib
-
Whether to compress serialized RDD partitions (e.g. for StorageLevel.MEMORY_ONLY_SER in Java and Scala or StorageLevel.MEMORY_ONLY in Python). Can save substantial space at the cost of some extra CPU time. Compression will use spark.io.compression.codec.
Default: false
-
Class to use for serializing objects that will be sent over the network or need to be cached in serialized form.
The default of Java serialization works with any Serializable Java object but is quite slow, so we recommend using org.apache.spark.serializer.KryoSerializer and configuring Kryo serialization when speed is necessary. Can be any subclass of org.apache.spark.Serializer.
Default: org.apache.spark.serializer.JavaSerializer
-
When serializing using org.apache.spark.serializer.JavaSerializer, the serializer caches objects to prevent writing redundant data; however, that stops garbage collection of those objects.
By calling 'reset' you flush that info from the serializer, allowing old objects to be collected. To turn off this periodic reset, set it to -1. By default it will reset the serializer every 100 objects.
Default: 100
-
Fraction of (heap space - 300MB) used for execution and storage.
The lower this is, the more frequently spills and cached data eviction occur. The purpose of this config is to set aside memory for internal metadata, user data structures, and imprecise size estimation in the case of sparse, unusually large records. Leaving this at the default value is recommended. For more detail, including important information about correctly tuning JVM garbage collection when increasing this value, see this description.
Default: 0.6
-
Amount of storage memory immune to eviction, expressed as a fraction of the size of the region set aside by spark.memory.fraction.
The higher this is, the less working memory may be available to execution and tasks may spill to disk more often. Leaving this at the default value is recommended. For more detail, see this description.
Default: 0.5
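A worked sketch of how spark.memory.fraction and spark.memory.storageFraction compose (local example names only, not properties of this module):
  heapMib = 4096
  unifiedRegionMib = (heapMib - 300) * 0.6    // spark.memory.fraction share: ~2277.6 MiB for execution + storage
  evictionImmuneMib = unifiedRegionMib * 0.5  // spark.memory.storageFraction share: ~1138.8 MiB immune to eviction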
-
The absolute amount of memory which can be used for off-heap allocation, in bytes unless otherwise specified.
This setting has no impact on heap memory usage, so if your executors' total memory consumption must fit within some hard limit then be sure to shrink your JVM heap size accordingly. This must be set to a positive value when spark.memory.offHeap.enabled=true.
Default: 0
-
Enables proactive block replication for RDD blocks.
Cached RDD block replicas lost due to executor failures are replenished if there are any existing available replicas. This tries to get the replication level of the block back to the initial number.
Default: false
-
Controls how often to trigger a garbage collection.
This context cleaner triggers cleanups only when weak references are garbage collected. In long-running applications with large driver JVMs, where there is little memory pressure on the driver, this may happen very occasionally or not at all. Not cleaning at all may lead to executors running out of disk space after a while.
Default: 30.min
-
Size of each piece of a block for TorrentBroadcastFactory, in KiB unless otherwise specified.
Too large a value decreases parallelism during broadcast (makes it slower); however, if it is too small, BlockManager might take a performance hit.
Default: 4.mib
-
Whether to enable checksum for broadcast.
If enabled, broadcasts will include a checksum, which can help detect corrupted blocks, at the cost of computing and sending a little more data. It's possible to disable it if the network has other mechanisms to guarantee data won't be corrupted during broadcast.
Default: true
-
The number of cores to use on each executor.
In standalone and Mesos coarse-grained modes, for more detail, see this description.
Default: 1 in YARN mode, all the available cores on the worker in standalone and Mesos coarse-grained modes.
-
Default number of partitions in RDDs returned by transformations like join, reduceByKey, and parallelize when not set by the user.
Default: For distributed shuffle operations like reduceByKey and join, the largest number of partitions in a parent RDD. For operations like parallelize with no parent RDDs, it depends on the cluster manager: Local mode: number of cores on the local machine; Mesos fine-grained mode: 8; Others: total number of cores on all executor nodes or 2, whichever is larger.
-
Interval between each executor's heartbeats to the driver.
Heartbeats let the driver know that the executor is still alive and update it with metrics for in-progress tasks. spark.executor.heartbeatInterval should be significantly less than spark.network.timeout.
Default: 10.s
-
If set to true (default), file fetching will use a local cache that is shared by executors that belong to the same application, which can improve task launching performance when running many executors on the same host.
If set to false, these caching optimizations will be disabled and all executors will fetch their own copies of files. This optimization may be disabled in order to use Spark local directories that reside on NFS filesystems (see SPARK-6313 for more details).
Default: true
-
The estimated cost to open a file, measured by the number of bytes that could be scanned at the same time.
This is used when putting multiple files into a partition. It is better to overestimate; then the partitions with small files will be faster than partitions with bigger files.
Default: 4194304 (4 MiB)
-
If set to true, clones a new Hadoop Configuration object for each task.
This option should be enabled to work around Configuration thread-safety issues (see SPARK-2546 for more details). This is disabled by default in order to avoid unexpected performance regressions for jobs that are not affected by these issues.
Default: false
-
If set to true, validates the output specification (e.g. checking if the output directory already exists) used in saveAsHadoopFile and other variants.
This can be disabled to silence exceptions due to pre-existing output directories. We recommend that users do not disable this except when trying to achieve compatibility with previous versions of Spark. Simply use Hadoop's FileSystem API to delete output directories by hand. This setting is ignored for jobs generated through Spark Streaming's StreamingContext, since data may need to be rewritten to pre-existing output directories during checkpoint recovery.
Default: true
-
Size of a block above which Spark memory maps when reading a block from disk.
The default unit is bytes, unless specified otherwise. This prevents Spark from memory mapping very small blocks. In general, memory mapping has high overhead for blocks close to or below the page size of the operating system.
Default: 2.mib
-
Whether to write per-stage peaks of executor metrics (for each executor) to the event log.
Note: The metrics are polled (collected) and sent in the executor heartbeat, and this is always done; this configuration is only to determine whether aggregated metric peaks are written to the event log.
Default: false
-
How often to collect executor metrics (in milliseconds).
If 0, the polling is done on executor heartbeats (thus at the heartbeat interval, specified by spark.executor.heartbeatInterval). If positive, the polling is done at this interval.
Default: 0
-
Maximum message size (in MiB) to allow in "control plane" communication; generally only applies to map output size information sent between executors and the driver.
Increase this if you are running jobs with many thousands of map and reduce tasks and see messages about the RPC message size.
Default: 128
-
Hostname or IP address where to bind listening sockets.
This config overrides the SPARK_LOCAL_IP environment variable (see below).
It also allows a different address from the local one to be advertised to executors or external systems. This is useful, for example, when running containers with bridged networking. For this to properly work, the different ports used by the driver (RPC, block manager and UI) need to be forwarded from the container's host.
Default: (value of spark.driver.host)
-
If enabled, off-heap buffer allocations are preferred by the shared allocators.
Off-heap buffers are used to reduce garbage collection during shuffle and cache block transfer. For environments where off-heap memory is tightly limited, users may wish to turn this off to force all allocations to be on-heap.
Default: true
-
Maximum number of retries when binding to a port before giving up.
When a port is given a specific value (non 0), each subsequent retry will increment the port used in the previous attempt by 1 before retrying. This essentially allows it to try a range of ports from the start port specified to port + maxRetries.
Default: 16
-
Remote blocks will be fetched to disk when the size of the block is above this threshold in bytes.
This is to avoid a giant request taking too much memory. Note this configuration will affect both shuffle fetch and block manager remote block fetch. For users who have enabled the external shuffle service, this feature can only work when the external shuffle service is at least 2.3.0.
Default: 200.mib
-
When running on a standalone deploy cluster or a Mesos cluster in "coarse-grained" sharing mode, the maximum amount of CPU cores to request for the application from across the cluster (not from each machine).
If not set, the default will be spark.deploy.defaultCores on Spark's standalone cluster manager, or infinite (all available cores) on Mesos.
Default: (not set)
-
How long to wait to launch a data-local task before giving up and launching it on a less-local node.
The same wait will be used to step through multiple locality levels (process-local, node-local, rack-local and then any). It is also possible to customize the waiting time for each level by setting spark.locality.wait.node, etc. You should increase this setting if your tasks are long and see poor locality, but the default usually works well.
Default: 3.s
-
The minimum ratio of registered resources (registered resources / total expected resources) to wait for before scheduling begins. Resources are executors in YARN and Kubernetes modes, and CPU cores in standalone and Mesos coarse-grained modes (for Mesos coarse-grained mode, 'spark.cores.max' is the total expected resources).
Specified as a double between 0.0 and 1.0. Regardless of whether the minimum ratio of resources has been reached, the maximum amount of time it will wait before scheduling begins is controlled by the config spark.scheduler.maxRegisteredResourcesWaitingTime.
Default: 0.8 for KUBERNETES mode; 0.8 for YARN mode; 0.0 for standalone mode and Mesos coarse-grained mode
-
The scheduling mode between jobs submitted to the same SparkContext.
Can be set to FAIR to use fair sharing instead of queueing jobs one after another. Useful for multi-user services.
Default: "FIFO"
-
The default capacity for event queues.
Spark will try to initialize an event queue using the capacity specified by `spark.scheduler.listenerbus.eventqueue.queueName.capacity` first. If it's not configured, Spark will use the default capacity specified by this config. Note that the capacity must be greater than 0. Consider increasing the value (e.g. 20000) if listener events are dropped. Increasing this value may result in the driver using more memory.
Default: 10000
-
Capacity for the shared event queue in the Spark listener bus, which holds events for external listener(s) that register to the listener bus.
Consider increasing the value if the listener events corresponding to the shared queue are dropped. Increasing this value may result in the driver using more memory.
Default: spark.scheduler.listenerbus.eventqueue.capacity
-
Capacity for the appStatus event queue, which holds events for internal application status listeners.
Consider increasing the value if the listener events corresponding to the appStatus queue are dropped. Increasing this value may result in the driver using more memory.
Default: spark.scheduler.listenerbus.eventqueue.capacity
-
Capacity for the executorManagement event queue in the Spark listener bus, which holds events for internal executor management listeners.
Consider increasing the value if the listener events corresponding to the executorManagement queue are dropped. Increasing this value may result in the driver using more memory.
Default: spark.scheduler.listenerbus.eventqueue.capacity
-
Capacity for the eventLog queue in the Spark listener bus, which holds events for event logging listeners that write events to event logs.
Consider increasing the value if the listener events corresponding to the eventLog queue are dropped. Increasing this value may result in the driver using more memory.
Default: spark.scheduler.listenerbus.eventqueue.capacity
-
Capacity for the streams queue in the Spark listener bus, which holds events for the internal streaming listener.
Consider increasing the value if the listener events corresponding to the streams queue are dropped. Increasing this value may result in the driver using more memory.
Default: spark.scheduler.listenerbus.eventqueue.capacity
-
If set to "true", Spark will merge ResourceProfiles when different profiles are specified in RDDs that get combined into a single stage.
When they are merged, Spark chooses the maximum of each resource and creates a new ResourceProfile. The default of false results in Spark throwing an exception if multiple different ResourceProfiles are found in RDDs going into the same stage.
Default: false
-
If set to "true", prevent Spark from scheduling tasks on executors that have been excluded due to too many task failures.
The algorithm used to exclude executors and nodes can be further controlled by the other "spark.excludeOnFailure" configuration options.
Default: false
-
(Experimental) How many different tasks must fail on one executor, in successful task sets, before the executor is excluded for the entire application.
Excluded executors will be automatically added back to the pool of available resources after the timeout specified by spark.excludeOnFailure.timeout. Note that with dynamic allocation, though, the executors may get marked as idle and be reclaimed by the cluster manager.
Default: 2
-
(Experimental) How many different executors must be excluded for the entire application, before the node is excluded for the entire application.
Excluded nodes will be automatically added back to the pool of available resources after the timeout specified by spark.excludeOnFailure.timeout. Note that with dynamic allocation, though, the executors on the node may get marked as idle and be reclaimed by the cluster manager.
Default: 2
-
(Experimental) If set to "true", allow Spark to automatically kill the executors when they are excluded on fetch failure or excluded for the entire application, as controlled by spark.killExcludedExecutors.application.*.
Note that when an entire node is excluded, all of the executors on that node will be killed.
Default: false
-
Task duration after which the scheduler will try to speculatively run the task.
If provided, tasks will be speculatively run if the current stage contains fewer tasks than or equal to the number of slots on a single executor and the task is taking longer than the threshold. This config helps speculate stages with very few tasks. Regular speculation configs may also apply if the executor slots are large enough, e.g. tasks might be re-launched if there are enough successful runs even though the threshold hasn't been reached. The number of slots is computed based on the conf values of spark.executor.cores and spark.task.cpus, minimum 1. Default unit is bytes, unless otherwise specified.
Default: null
-
Amount of a particular resource type to allocate for each task; note that this can be a double.
If this is specified, you must also provide the executor config spark.executor.resource.{resourceName}.amount and any corresponding discovery configs so that your executors are created with that resource type. In addition to whole amounts, a fractional amount (for example, 0.25, which means 1/4th of a resource) may be specified. Fractional amounts must be less than or equal to 0.5, or in other words, the minimum amount of resource sharing is 2 tasks per resource. Additionally, fractional amounts are floored in order to assign resource slots (e.g. a 0.2222 configuration, or 1/0.2222 slots, will become 4 tasks/resource, not 5).
Default: 1
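A small sketch of the flooring rule described above (local example names only):
  taskResourceAmount = 0.2222
  slotsPerResource = 1 / taskResourceAmount   // ~4.5; Spark floors this, so 4 tasks end up sharing one resource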
-
Number of continuous failures of any particular task before giving up on the job.
The total number of failures spread across different tasks will not cause the job to fail; a particular task has to fail this number of attempts continuously. If any attempt succeeds, the failure count for the task will be reset. Should be greater than or equal to 1. Number of allowed retries = this value - 1.
Default: 4
-
Enables monitoring of killed / interrupted tasks.
When set to true, any task which is killed will be monitored by the executor until that task actually finishes executing. See the other spark.task.reaper.* configurations for details on how to control the exact behavior of this monitoring. When set to false (the default), task killing will use an older code path which lacks such monitoring.
Default: false
-
When spark.task.reaper.enabled = true, this setting controls the frequency at which executors will poll the status of killed tasks.
If a killed task is still running when polled, then a warning will be logged and, by default, a thread-dump of the task will be logged (this thread dump can be disabled via the spark.task.reaper.threadDump setting, which is documented below).
Default: 10.s
-
When spark.task.reaper.enabled = true, this setting specifies a timeout after which the executor JVM will kill itself if a killed task has not stopped running.
The default value, -1, disables this mechanism and prevents the executor from self-destructing. The purpose of this setting is to act as a safety net to prevent runaway noncancellable tasks from rendering an executor unusable.
Default: -1
-
The timeout in seconds for each barrier() call from a barrier task.
If the coordinator does not receive all the sync messages from barrier tasks within the configured time, it throws a SparkException to fail all the tasks. The default value is set to 31536000 (3600 * 24 * 365), so the barrier() call shall wait for one year.
Default: 365.0
-
Time in seconds to wait between a max concurrent tasks check failure and the next check.
A max concurrent tasks check ensures the cluster can launch more concurrent tasks than required by a barrier stage on job submission. The check can fail in case a cluster has just started and not enough executors have registered, so we wait for a little while and try to perform the check again. If the check fails more than the configured max failure times for a job, the current job submission fails. Note this config only applies to jobs that contain one or more barrier stages; we won't perform the check on non-barrier jobs.
Default: 15.s
-
Number of max concurrent tasks check failures allowed before failing a job submission.
A max concurrent tasks check ensures the cluster can launch more concurrent tasks than required by a barrier stage on job submission. The check can fail in case a cluster has just started and not enough executors have registered, so we wait for a little while and try to perform the check again. If the check fails more than the configured max failure times for a job, the current job submission fails. Note this config only applies to jobs that contain one or more barrier stages; we won't perform the check on non-barrier jobs.
Default: 40
-
Whether to use dynamic resource allocation, which scales the number of executors registered with this application up and down based on the workload.
For more detail, see the description here.
This requires spark.shuffle.service.enabled or spark.dynamicAllocation.shuffleTracking.enabled to be set. The following configurations are also relevant: spark.dynamicAllocation.minExecutors, spark.dynamicAllocation.maxExecutors, spark.dynamicAllocation.initialExecutors and spark.dynamicAllocation.executorAllocationRatio.
Default: false
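A minimal, hedged Pkl sketch of enabling dynamic allocation together with shuffle tracking; the backtick-quoted property names are assumed to mirror the Spark keys and should be checked against Properties.pkl:
  // Hypothetical sketch; the values are example settings only.
  `spark.dynamicAllocation.enabled` = true
  `spark.dynamicAllocation.shuffleTracking.enabled` = true
  `spark.dynamicAllocation.minExecutors` = 1
  `spark.dynamicAllocation.maxExecutors` = 10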
-
If dynamic allocation is enabled and an executor has been idle for more than this duration, the executor will be removed.
For more detail, see this description.
Default: 60.s
-
If dynamic allocation is enabled and an executor which has cached data blocks has been idle for more than this duration, the executor will be removed.
For more details, see this description.
Default: infinity
-
Initial number of executors to run if dynamic allocation is enabled.
If `--num-executors` (or `spark.executor.instances`) is set and larger than this value, it will be used as the initial number of executors.
Default: spark.dynamicAllocation.minExecutors
-
By default, dynamic allocation will request enough executors to maximize the parallelism according to the number of tasks to process.
While this minimizes the latency of the job, with small tasks this setting can waste a lot of resources due to executor allocation overhead, as some executors might not even do any work. This setting allows setting a ratio that will be used to reduce the number of executors w.r.t. full parallelism. Defaults to 1.0 to give maximum parallelism; 0.5 will divide the target number of executors by 2. The target number of executors computed by dynamic allocation can still be overridden by the spark.dynamicAllocation.minExecutors and spark.dynamicAllocation.maxExecutors settings.
Default: 1
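For instance, following the halving behaviour described above (local example names only):
  fullParallelismTarget = 200      // executors dynamic allocation would request at ratio 1.0
  allocationRatio = 0.5
  reducedTarget = fullParallelismTarget * allocationRatio  // 100 executors requested instead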
-
If dynamic allocation is enabled and there have been pending tasks backlogged for more than this duration, new executors will be requested.
For more detail, see this description.
Default: 1.s
-
Same as spark.dynamicAllocation.schedulerBacklogTimeout, but used only for subsequent executor requests.
For more detail, see this description.
Default: "schedulerBacklogTimeout"
-
When shuffle tracking is enabled, controls the timeout for executors that are holding shuffle data.
The default value means that Spark will rely on the shuffles being garbage collected to be able to release executors. If for some reason garbage collection is not cleaning up shuffles quickly enough, this option can be used to control when to time out executors even when they are storing shuffle data.
Default: infinity
-
Enables or disables Spark Streaming's internal backpressure mechanism (since 1.5).
This enables Spark Streaming to control the receiving rate based on the current batch scheduling delays and processing times so that the system receives only as fast as it can process. Internally, this dynamically sets the maximum receiving rate of receivers. This rate is upper bounded by the values spark.streaming.receiver.maxRate and spark.streaming.kafka.maxRatePerPartition if they are set (see below).
Default: false
-
Interval at which data received by Spark Streaming receivers is chunked into blocks of data before storing them in Spark.
Minimum recommended - 50 ms. See the performance tuning section in the Spark Streaming programming guide for more details.
Default: 200.ms
-
Maximum rate (number of records per second) at which each receiver will receive data.
Effectively, each stream will consume at most this number of records per second. Setting this configuration to 0 or a negative number will put no limit on the rate. See the deployment guide in the Spark Streaming programming guide for more details.
Default: null
-
Enable write-ahead logs for receivers.
All the input data received through receivers will be saved to write-ahead logs that will allow it to be recovered after driver failures. See the deployment guide in the Spark Streaming programming guide for more details.
Default: false
-
Force RDDs generated and persisted by Spark Streaming to be automatically unpersisted from Spark's memory.
The raw input data received by Spark Streaming is also automatically cleared. Setting this to false will allow the raw data and persisted RDDs to be accessible outside the streaming application, as they will not be cleared automatically. But it comes at the cost of higher memory usage in Spark.
Default: true
-
Maximum rate (number of records per second) at which data will be read from each Kafka partition when using the new Kafka direct stream API.
See the Kafka Integration guide for more details.
Default: null
-
Whether to close the file after writing a write-ahead log record on the driver.
Set this to 'true' when you want to use S3 (or any file system that does not support flushing) for the metadata WAL on the driver.
Default: false
-
Whether to close the file after writing a write-ahead log record on the receivers.
Set this to 'true' when you want to use S3 (or any file system that does not support flushing) for the data WAL on the receivers.
Default: false
-
Executable for executing the sparkR shell in client modes for the driver.
Ignored in cluster modes. It is the same as the environment variable SPARKR_DRIVER_R, but takes precedence over it. spark.r.shell.command is used for the sparkR shell while spark.r.driver.command is used for running R scripts.
Default: "R"
-
Class name of the implementation of MergedShuffleFileManager that manages push-based shuffle.
This acts as a server-side config to disable or enable push-based shuffle. By default, push-based shuffle is disabled at the server side.
To enable push-based shuffle on the server side, set this config to org.apache.spark.network.shuffle.RemoteBlockPushResolver.
Default: org.apache.spark.network.shuffle.NoOpMergedShuffleFileManager
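A hedged Pkl sketch of enabling push-based shuffle on the server side as described above; the backtick-quoted key is an assumed property name, while the class name is the one given in the description:
  // Hypothetical sketch; verify the actual property name in Properties.pkl.
  `spark.shuffle.push.server.mergedShuffleFileManagerImpl` = "org.apache.spark.network.shuffle.RemoteBlockPushResolver"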
-
The minimum size of a chunk when dividing a merged shuffle file into multiple chunks during push-based shuffle.
A merged shuffle file consists of multiple small shuffle blocks. Fetching the complete merged shuffle file in a single disk I/O increases the memory requirements for both the clients and the external shuffle services. Instead, the external shuffle service serves the merged file in MB-sized chunks. This configuration controls how big a chunk can get. A corresponding index file for each merged shuffle file will be generated indicating chunk boundaries.
Setting this too high would increase the memory requirements on both the clients and the external shuffle service.
Setting this too low would increase the overall number of RPC requests to the external shuffle service unnecessarily.
Default: 2.mib
-
The amount of time in seconds the driver waits, after all mappers have finished for a given shuffle map stage, before it sends merge finalize requests to remote external shuffle services.
This gives the external shuffle services extra time to merge blocks. Setting this too long could potentially lead to performance regression.
Default: 10.s
-
Maximum number of merger locations cached for push-based shuffle.
Currently, merger locations are hosts of external shuffle services responsible for handling pushed blocks, merging them and serving merged blocks for later shuffle fetch.
Default: 500
-
Ratio used to compute the minimum number of shuffle merger locations required for a stage based on the number of partitions for the reducer stage.
For example, a reduce stage which has 100 partitions and uses the default value 0.05 requires at least 5 unique merger locations to enable push-based shuffle.
Default: 0.05
-
The static threshold for the number of shuffle push merger locations that should be available in order to enable push-based shuffle for a stage.
Note this config works in conjunction with spark.shuffle.push.mergersMinThresholdRatio. The maximum of spark.shuffle.push.mergersMinStaticThreshold and spark.shuffle.push.mergersMinThresholdRatio times the number of reducer partitions is the number of mergers needed to enable push-based shuffle for a stage. For example: with 1000 partitions for the child stage, spark.shuffle.push.mergersMinStaticThreshold as 5 and spark.shuffle.push.mergersMinThresholdRatio set to 0.05, we would need at least 50 mergers to enable push-based shuffle for that stage.
Default: 5
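The combined rule from the example above, as a worked sketch (local example names only):
  staticThreshold = 5
  thresholdRatio = 0.05
  childStagePartitions = 1000
  mergersNeeded = if (childStagePartitions * thresholdRatio > staticThreshold) childStagePartitions * thresholdRatio else staticThreshold  // = 50, matching the example above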
-
The max size of an individual block to push to the remote external shuffle services.
Blocks larger than this threshold are not pushed to be merged remotely; those shuffle blocks will be fetched in the original manner.
Setting this too high would result in more blocks being pushed to remote external shuffle services, but those are already efficiently fetched with the existing mechanisms, so it only adds the overhead of pushing large blocks to remote external shuffle services. It is recommended to set spark.shuffle.push.maxBlockSizeToPush lower than the spark.shuffle.push.maxBlockBatchSize config's value.
Setting this too low would result in fewer blocks getting merged and being directly fetched from the mapper external shuffle service, resulting in more small random reads that affect overall disk I/O performance.
Default: 1.mib
-
The max size of a batch of shuffle blocks to be grouped into a single push request.
The default is set to 3m in order to keep it slightly higher than the spark.storage.memoryMapThreshold default, which is 2m, as it is very likely that each batch of blocks gets memory mapped, which incurs higher overhead.
Default: 3.mib
Methods
-
function relativePathTo(other)
Returns the relative, descendent directory path between this module and other. Throws if no such path exists.
For example, if module mod1 has path /dir1/mod1.pkl, and module mod2 has path /dir1/dir2/dir3/mod2.pkl, then mod1.relativePathTo(mod2) will return List("dir2", "dir3").
A common use case is to compute the directory path between a template located at the root of a hierarchy (say rootModule.pkl) and the currently evaluated module (accessible via the module keyword):
import "rootModule.pkl" // self-import
path = rootModule.relativePathTo(module)
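As a follow-up sketch, the returned List can be joined into a directory string (the join call is standard Pkl List API, not part of this module):
  dirPath = rootModule.relativePathTo(module).join("/")  // e.g. "dir2/dir3"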
-
The output of this module.