In what situations can I use Dask instead of Apache Spark?

Hariprasad · Aug 10, 2016 · Viewed 30.9k times

I am currently using Pandas and Spark for data analysis. I found that Dask provides parallelized NumPy arrays and Pandas DataFrames.

Pandas is easy and intuitive for doing data analysis in Python. But I have difficulty handling multiple larger DataFrames in Pandas due to limited system memory.
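For context, here is a minimal sketch of what I mean (the file pattern and column names are placeholders, not my actual data): Dask DataFrame reads the files as many Pandas partitions instead of one in-memory DataFrame, so the dataset can be larger than RAM.

```python
import dask.dataframe as dd

# "data-*.csv", "customer_id" and "amount" are hypothetical placeholders.
# Nothing is loaded yet; Dask only builds a lazy task graph here.
df = dd.read_csv("data-*.csv")

# .compute() triggers the actual parallel read and aggregation,
# returning an ordinary Pandas object.
result = df.groupby("customer_id")["amount"].sum().compute()
print(result.head())
```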

Simple Answer:

Apache Spark is an all-inclusive framework combining distributed computing, SQL queries, machine learning, and more that runs on the JVM and is commonly co-deployed with other Big Data frameworks like Hadoop. ... Generally Dask is smaller and lighter weight than Spark.

I learned the following details from http://dask.pydata.org/en/latest/spark.html:

  • Dask is lightweight
  • Dask is typically used on a single machine, but also runs well on a distributed cluster.
  • Dask provides parallel arrays, dataframes, machine learning, and custom algorithms (see the sketch after this list)
  • Dask has an advantage for Python users because it is itself a Python library, so serialization and debugging happen more smoothly when things go wrong.
  • Dask gives up high-level understanding to allow users to express more complex parallel algorithms.
  • Dask is lighter weight and is easier to integrate into existing code and hardware.
  • If you want a single project that does everything and you’re already on Big Data hardware then Spark is a safe bet
  • Spark is typically used on small to medium sized clusters, but also runs well on a single machine.
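As an illustration of the single-machine vs. cluster point and the custom-algorithms point above, here is a minimal sketch. It assumes the `dask.distributed` package is installed, and the scheduler address mentioned in the comment is a placeholder, not something from my setup.

```python
import dask
from dask.distributed import Client

# dask.delayed turns plain Python functions into lazy tasks in a graph,
# which is how custom parallel algorithms can be expressed.
@dask.delayed
def increment(x):
    return x + 1

@dask.delayed
def add(a, b):
    return a + b

if __name__ == "__main__":
    # With no arguments, Client() starts a local scheduler plus workers that
    # use all cores of this machine. Pointing the same code at a cluster
    # would just be Client("scheduler-host:8786") (address is a placeholder).
    client = Client()

    total = add(increment(1), increment(2))  # builds a small task graph
    print(total.compute())                   # executes in parallel -> 5
```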

I learned more about Dask from this link: https://www.continuum.io/blog/developer-blog/high-performance-hadoop-anaconda-and-dask-your-cluster

  • If you’re running into memory issues, storage limitations, or CPU boundaries on a single machine when using Pandas, NumPy, or other computations with Python, Dask can help you scale up on all of the cores on a single machine, or scale out on all of the cores and memory across your cluster.
  • Dask works well on a single machine to make use of all of the cores on your laptop and process larger-than-memory data, and it scales up resiliently and elastically on clusters with hundreds of nodes.
  • Dask works natively from Python with data in different formats and storage systems, including the Hadoop Distributed File System (HDFS) and Amazon S3. Anaconda and Dask can work with your existing enterprise Hadoop distribution, including Cloudera CDH and Hortonworks HDP.
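A hedged sketch of that last point (the paths are placeholders, and reading from S3 or HDFS assumes the appropriate filesystem libraries, such as s3fs, are installed alongside Dask):

```python
import dask.dataframe as dd

# Dask resolves the protocol prefix and reads the matching files in parallel.
s3_df = dd.read_csv("s3://my-bucket/logs-2016-*.csv")     # needs s3fs installed
hdfs_df = dd.read_csv("hdfs:///data/events/part-*.csv")   # needs an HDFS client library

# Each file pattern becomes a collection of partitions (lazy Pandas frames).
print(s3_df.npartitions, hdfs_df.npartitions)
```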

http://dask.pydata.org/en/latest/dataframe-overview.html

Limitations

Dask.DataFrame does not implement the entire Pandas interface. Users expecting this will be disappointed. Notably, dask.dataframe has the following limitations:

  1. Setting a new index from an unsorted column is expensive
  2. Many operations, like groupby-apply and join on unsorted columns, require setting the index, which, as mentioned above, is expensive (see the sketch after this list)
  3. The Pandas API is very large. Dask.dataframe does not attempt to implement many pandas features or any of the more exotic data structures like NDFrames
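A small sketch of those first two limitations (the column names and data are made up): setting an index on an unsorted column forces a shuffle of the data across partitions, and groupby-apply needs extra hints such as `meta` that plain Pandas does not.

```python
import pandas as pd
import dask.dataframe as dd

# Hypothetical data; "user_id" and "value" are placeholder column names.
pdf = pd.DataFrame({"user_id": [3, 1, 2, 1, 3], "value": [10, 20, 30, 40, 50]})
df = dd.from_pandas(pdf, npartitions=2)

# (1) Setting an index on an unsorted column shuffles/sorts all the data
#     across partitions -- trivial here, expensive on real datasets.
indexed = df.set_index("user_id")

# (2) groupby-apply also implies a shuffle, and Dask asks for `meta`
#     because it cannot infer the output schema lazily.
spread = df.groupby("user_id")["value"].apply(
    lambda s: s.max() - s.min(), meta=("value", "int64")
)

print(indexed.compute())
print(spread.compute())
```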

Thanks to the Dask developers. It seems like very promising technology.

Overall, I understand that Dask is simpler to use than Spark. Dask is as flexible as Pandas, with more power to compute in parallel across more CPUs.
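For example, here is a minimal sketch of that last point (the file pattern and column names are placeholders, and it assumes a reasonably recent Dask version): the same Pandas-style code can be evaluated by different schedulers to use more CPUs.

```python
import dask.dataframe as dd

# Build the same lazy computation once.
df = dd.read_csv("transactions-*.csv")
totals = df.groupby("account")["amount"].sum()

# Choose how to run it: threads (the default for dataframes) or processes.
print(totals.compute(scheduler="threads"))
print(totals.compute(scheduler="processes"))
```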

I understand all the above facts about Dask.

So, roughly how much data (in terabytes) can be processed with Dask?

Answer

MaxU · Aug 10, 2016

You may want to read the Dask comparison to Apache Spark:

Apache Spark is an all-inclusive framework combining distributed computing, SQL queries, machine learning, and more that runs on the JVM and is commonly co-deployed with other Big Data frameworks like Hadoop. It was originally optimized for bulk data ingest and querying common in data engineering and business analytics but has since broadened out. Spark is typically used on small to medium sized clusters but also runs well on a single machine.

Dask is a parallel programming library that combines with the Numeric Python ecosystem to provide parallel arrays, dataframes, machine learning, and custom algorithms. It is based on Python and the foundational C/Fortran stack. Dask was originally designed to complement other libraries with parallelism, particularly for numeric computing and advanced analytics, but has since broadened out. Dask is typically used on a single machine, but also runs well on a distributed cluster.

Generally Dask is smaller and lighter weight than Spark. This means that it has fewer features and instead is intended to be used in conjunction with other libraries, particularly those in the numeric Python ecosystem.
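For example, the "parallel arrays" part of that description looks roughly like this in practice (the array size and chunking below are arbitrary, just for illustration):

```python
import dask.array as da

# A 20000 x 20000 array split into 1000 x 1000 NumPy chunks; each chunk is
# an ordinary NumPy array, and the reduction runs over the chunks in parallel.
x = da.random.random((20000, 20000), chunks=(1000, 1000))
y = (x + x.T).mean(axis=0)
print(y.compute()[:5])  # .compute() returns a plain NumPy array
```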