There's a lot of interest these days in Erlang as a language for writing parallel programs on multicore. I've heard people argue that Erlang's message-passing model is easier to program than the dominant shared-memory models such as threads.
Conversely, in the high-performance computing community the dominant parallel programming model has been MPI, which also implements message passing. But in the HPC world this message-passing model is generally considered very difficult to program in, and people argue that shared-memory models such as OpenMP or UPC are easier.
Does anybody know why there is such a difference in the perception of message-passing vs. shared memory in the IT and HPC worlds? Is it due to some fundamental difference in how Erlang and MPI implement message passing that makes Erlang-style message-passing much easier than MPI? Or is there some other reason?
I agree with all previous answers, but I think a key point that has not been made totally clear is the match of model to domain: that is one reason MPI might be considered hard and Erlang easy.
Erlang is based on a concept of local memory and asynchronous message passing; shared state is handled through some form of global database (such as Mnesia) that all processes can reach. It is designed for applications that do not move a whole lot of data around, and that are not expected to explode out to 100k separate nodes needing coordination.
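To make that concrete, here is a minimal Python stand-in for the Erlang style: each "process" owns its state privately and the only way to interact with it is to send it an asynchronous message. (`queue.Queue` plays the role of an Erlang mailbox; the `counter` actor and its message names are purely illustrative, not any real Erlang API.)

```python
from queue import Queue
from threading import Thread

def counter(mailbox: Queue, replies: Queue):
    # All state is local to this "process"; nothing is shared.
    count = 0
    while True:
        msg = mailbox.get()        # block until a message arrives
        if msg == "incr":
            count += 1
        elif msg == "get":
            replies.put(count)     # reply by sending a message back
        elif msg == "stop":
            return

mailbox, replies = Queue(), Queue()
Thread(target=counter, args=(mailbox, replies), daemon=True).start()

for _ in range(3):
    mailbox.put("incr")            # fire-and-forget, asynchronous
mailbox.put("get")
result = replies.get()             # -> 3
mailbox.put("stop")
```

Note that no data structure is ever partitioned or moved in bulk here; messages are small and the state stays put, which is exactly the regime Erlang is designed for.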
MPI is based on local memory and message passing, and is intended for problems where moving data around is a key part of the domain. High-performance computing is very much about taking the dataset for a problem and splitting it up among a host of compute resources, and that is hard work in a message-passing system, because the data has to be explicitly distributed with balancing in mind. Essentially, MPI can be viewed as a grudging admission that shared memory does not scale, and it targets high-performance computation spread across 100k processors or more.
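The "explicit distribution with balancing in mind" part is where the programming burden shows up. Here is a small sketch, in serial Python for illustration only, of the bookkeeping an MPI programmer does by hand: a balanced block decomposition of a global dataset, local work on each piece, and a final reduction. (A real MPI program would use collectives such as `MPI_Scatterv` and `MPI_Reduce`, e.g. via mpi4py; `block_partition` is a hypothetical helper, not a library function.)

```python
def block_partition(n, nranks):
    """Balanced block decomposition: each rank gets n // nranks items,
    and the first n % nranks ranks take one extra item each."""
    base, extra = divmod(n, nranks)
    bounds, start = [], 0
    for rank in range(nranks):
        size = base + (1 if rank < extra else 0)
        bounds.append((start, start + size))
        start += size
    return bounds

data = list(range(10))                 # the global dataset
bounds = block_partition(len(data), 4) # programmer chooses the split
# Each "rank" works only on its own local slice...
partials = [sum(data[lo:hi]) for lo, hi in bounds]
# ...and a reduction combines the local results into the answer.
total = sum(partials)                  # -> 45
```

For 10 items over 4 ranks this yields slices of sizes 3, 3, 2, 2; in shared-memory models like OpenMP that split is largely done for you by the runtime, which is a big part of why HPC programmers find message passing harder.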
Erlang is not trying to achieve the highest possible performance, but rather to decompose a naturally parallel problem into its natural threads of control. It was designed with a totally different type of programming task in mind than MPI was.
So Erlang is best compared to pthreads and other fairly local, heterogeneous threading solutions, rather than to MPI, which is aimed at a very different (and to some extent inherently harder) problem set.