ggomezm

Is there a way to force Spark to write data to disk? I'm wondering whether, for a really long chain of RDD operations, it might be beneficial to add a checkpoint in the middle of the computation so that if something goes wrong we don't have to start over from the beginning.

sareyan

It seems like the footnote on the slide may cover optional writing to disk? (See the sketch at the end of this comment.)

In the standard (in-memory) case, I don't get why we describe Spark as "fault tolerant" if it's only fault tolerant in the sense that partitions are independent of each other; I don't see how that is a feature over Hadoop.
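
From a quick look at the RDD API, both behaviors seem to be exposed explicitly (I'm sketching this from memory, so the paths and the toy pipeline below are just placeholders): persist(StorageLevel.DISK_ONLY) keeps partitions on disk instead of in memory, though lost blocks are still recomputed from the lineage, while setCheckpointDir plus checkpoint() writes the partitions out and truncates the lineage, which sounds like what ggomezm wants for a long computation.

// Minimal sketch, not from the slides; app name, master, paths, and the toy
// pipeline are placeholders.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object DiskAndCheckpointSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("disk-and-checkpoint").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Checkpoint files should really live on reliable storage (e.g. HDFS) so
    // they survive node failures; a local path is used here only to illustrate.
    sc.setCheckpointDir("/tmp/spark-checkpoints")

    // Stand-in for a long chain of transformations.
    val expensive = sc.parallelize(1 to 1000000)
      .map(x => (x % 100, x.toLong * x))
      .reduceByKey(_ + _)

    // Option 1: keep the computed partitions on disk rather than in memory.
    // If a block is lost, Spark still recomputes it from the lineage.
    expensive.persist(StorageLevel.DISK_ONLY)

    // Option 2: write the partitions to the checkpoint directory and truncate
    // the lineage, so recovery restarts from the checkpoint rather than from
    // the original input.
    expensive.checkpoint()

    // Both take effect lazily, when the next action forces evaluation.
    println(expensive.count())

    sc.stop()
  }
}

On the fault-tolerance question, my understanding is that the claim is less about partitions being independent and more about lineage: a lost partition can be recomputed from its ancestors without rerunning the whole job, whereas Hadoop gets durability by materializing each MapReduce stage to disk.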
