Commit eec4bd1a authored by Andrew Ash, committed by Aaron Davidson

Typo: Standlone -> Standalone

Author: Andrew Ash <andrew@andrewash.com>

Closes #601 from ash211/typo and squashes the following commits:

9cd43ac [Andrew Ash] Change docs references to metrics.properties, not metrics.conf
3813ff1 [Andrew Ash] Typo: mulitcast -> multicast
873bd2f [Andrew Ash] Typo: Standlone -> Standalone
parent 2414ed31
@@ -67,7 +67,7 @@
 #   period 10 Poll period
 #   unit seconds Units of poll period
 #   ttl 1 TTL of messages sent by Ganglia
-#   mode multicast Ganglia network mode ('unicast' or 'mulitcast')
+#   mode multicast Ganglia network mode ('unicast' or 'multicast')

 # org.apache.spark.metrics.sink.JmxSink
...
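For context, the options in this hunk configure Ganglia reporting. A minimal sketch of what a corresponding GangliaSink entry in `conf/metrics.properties` might look like; the multicast group and port below are placeholder values, not taken from this commit:

    # Hypothetical GangliaSink setup; host/port are placeholder values for
    # a Ganglia gmond multicast group, not values from this commit.
    *.sink.ganglia.class=org.apache.spark.metrics.sink.GangliaSink
    *.sink.ganglia.host=239.2.11.71
    *.sink.ganglia.port=8649
    *.sink.ganglia.period=10
    *.sink.ganglia.unit=seconds
    *.sink.ganglia.ttl=1
    *.sink.ganglia.mode=multicast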
@@ -19,7 +19,7 @@ You can access this interface by simply opening `http://<driver-node>:4040` in a
 If multiple SparkContexts are running on the same host, they will bind to succesive ports
 beginning with 4040 (4041, 4042, etc).

-Spark's Standlone Mode cluster manager also has its own
+Spark's Standalone Mode cluster manager also has its own
 [web UI](spark-standalone.html#monitoring-and-logging).

 Note that in both of these UIs, the tables are sortable by clicking their headers,
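As an aside on the 4040/4041/4042 behavior described in the hunk above: the starting port is itself configurable. A minimal sketch, assuming the standard `spark.ui.port` property documented in configuration.html:

    # Hypothetical override: the first SparkContext binds to this port, and
    # further contexts on the same host probe successive ports upward from it.
    spark.ui.port=4040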
@@ -31,7 +31,7 @@ Spark has a configurable metrics system based on the
 [Coda Hale Metrics Library](http://metrics.codahale.com/).
 This allows users to report Spark metrics to a variety of sinks including HTTP, JMX, and CSV
 files. The metrics system is configured via a configuration file that Spark expects to be present
-at `$SPARK_HOME/conf/metrics.conf`. A custom file location can be specified via the
+at `$SPARK_HOME/conf/metrics.properties`. A custom file location can be specified via the
 `spark.metrics.conf` [configuration property](configuration.html#spark-properties).
 Spark's metrics are decoupled into different
 _instances_ corresponding to Spark components. Within each instance, you can configure a
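To illustrate the `spark.metrics.conf` property mentioned in the hunk above, a hedged sketch of pointing Spark at a metrics file kept outside `$SPARK_HOME/conf`; the path is a placeholder:

    # Hypothetical custom location for the metrics configuration file;
    # /etc/spark/custom-metrics.properties is a placeholder path.
    spark.metrics.conf=/etc/spark/custom-metrics.properties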
@@ -54,7 +54,7 @@ Each instance can report to zero or more _sinks_. Sinks are contained in the
 * `GraphiteSink`: Sends metrics to a Graphite node.

 The syntax of the metrics configuration file is defined in an example configuration file,
-`$SPARK_HOME/conf/metrics.conf.template`.
+`$SPARK_HOME/conf/metrics.properties.template`.

 # Advanced Instrumentation
...
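As a concrete illustration of the syntax that template defines, a minimal sketch of a `metrics.properties` entry for the `GraphiteSink` listed above; the `[instance].sink.[name].[option]` layout follows the template, but the host and port are placeholders:

    # Hypothetical Graphite endpoint; host and port are placeholder values.
    # '*' would apply the sink to every instance; 'master' limits it to one.
    master.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
    master.sink.graphite.host=graphite.example.com
    master.sink.graphite.port=2003
    master.sink.graphite.period=10
    master.sink.graphite.unit=seconds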
@@ -10,7 +10,7 @@ In addition to running on the Mesos or YARN cluster managers, Spark also provide
 # Installing Spark Standalone to a Cluster

-To install Spark Standlone mode, you simply place a compiled version of Spark on each node on the cluster. You can obtain pre-built versions of Spark with each release or [build it yourself](index.html#building).
+To install Spark Standalone mode, you simply place a compiled version of Spark on each node on the cluster. You can obtain pre-built versions of Spark with each release or [build it yourself](index.html#building).

 # Starting a Cluster Manually
...