diff --git a/conf/metrics.properties.template b/conf/metrics.properties.template
index 1c3d94e1b0831d69e4635e328e69a0511a94e3f4..30bcab0c9330221e65ef77a52db1333171d1ba09 100644
--- a/conf/metrics.properties.template
+++ b/conf/metrics.properties.template
@@ -67,7 +67,7 @@
 #   period    10        Poll period
 #   unit      seconds   Units of poll period
 #   ttl       1         TTL of messages sent by Ganglia
-#   mode      multicast Ganglia network mode ('unicast' or 'mulitcast')
+#   mode      multicast Ganglia network mode ('unicast' or 'multicast')
 
 # org.apache.spark.metrics.sink.JmxSink
diff --git a/docs/monitoring.md b/docs/monitoring.md
index 0d5eb7065e9f0a9ad7d12d5171b75310836346cb..e9b1d2b2f4ffbf629c685ff6ee64d77693560767 100644
--- a/docs/monitoring.md
+++ b/docs/monitoring.md
@@ -19,7 +19,7 @@ You can access this interface by simply opening `http://<driver-node>:4040` in a
 If multiple SparkContexts are running on the same host, they will bind to
 succesive ports beginning with 4040 (4041, 4042, etc).
 
-Spark's Standlone Mode cluster manager also has its own
+Spark's Standalone Mode cluster manager also has its own
 [web UI](spark-standalone.html#monitoring-and-logging).
 
 Note that in both of these UIs, the tables are sortable by clicking their headers,
@@ -31,7 +31,7 @@ Spark has a configurable metrics system based on the
 [Coda Hale Metrics Library](http://metrics.codahale.com/).
 This allows users to report Spark metrics to a variety of sinks including HTTP, JMX, and CSV
 files. The metrics system is configured via a configuration file that Spark expects to be present
-at `$SPARK_HOME/conf/metrics.conf`. A custom file location can be specified via the
+at `$SPARK_HOME/conf/metrics.properties`. A custom file location can be specified via the
 `spark.metrics.conf` [configuration property](configuration.html#spark-properties).
 Spark's metrics are decoupled into different _instances_ corresponding to Spark components.
 Within each instance, you can configure a
@@ -54,7 +54,7 @@ Each instance can report to zero or more _sinks_. Sinks are contained in the
 * `GraphiteSink`: Sends metrics to a Graphite node.
 
 The syntax of the metrics configuration file is defined in an example configuration file,
-`$SPARK_HOME/conf/metrics.conf.template`.
+`$SPARK_HOME/conf/metrics.properties.template`.
 
 # Advanced Instrumentation
diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index 3388c14ec4d4881869a18f63a570eb505c2b723a..51fb3a4f7f8c516f8622082e69b14d75bbc55ce7 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -10,7 +10,7 @@ In addition to running on the Mesos or YARN cluster managers, Spark also provide
 # Installing Spark Standalone to a Cluster
 
-To install Spark Standlone mode, you simply place a compiled version of Spark on each node on the cluster. You can obtain pre-built versions of Spark with each release or [build it yourself](index.html#building).
+To install Spark Standalone mode, you simply place a compiled version of Spark on each node on the cluster. You can obtain pre-built versions of Spark with each release or [build it yourself](index.html#building).
 
 # Starting a Cluster Manually