Commit 293a0af5 authored by Aaron Davidson's avatar Aaron Davidson

In experimental clusters we've observed that a 10-second timeout was insufficient, despite a low number of nodes and a relatively small workload (16 nodes, <1.5 TB of data). This would cause an entire job to fail at the beginning of the reduce phase. There is no particular reason for this value to be small, as a timeout should only occur in an exceptional situation.

Also centralized the reading of spark.akka.askTimeout into AkkaUtils (surely this can later be cleaned up to use Typesafe Config).
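A minimal sketch of what centralizing the timeout read into AkkaUtils could look like; the method name, the 30-second default, and the use of a system property are assumptions for illustration, not necessarily the exact code in this commit.

```scala
import scala.concurrent.duration.{Duration, FiniteDuration}

object AkkaUtils {
  // Single place to read spark.akka.askTimeout, so callers no longer
  // parse the property themselves. Default of 30 seconds is assumed here;
  // the point of the commit is that it is larger than the old 10 seconds.
  def askTimeout: FiniteDuration =
    Duration.create(System.getProperty("spark.akka.askTimeout", "30").toLong, "seconds")
}
```

Callers would then use `AkkaUtils.askTimeout` wherever they previously constructed their own timeout from the property.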

Finally, deleted some lurking implicits. If anyone can think of a reason they should still be there, please let me know.
parent c64a53a4
Showing with 48 additions and 71 deletions