I know BCD is a more intuitive datatype if you don't know binary. But I don't know why anyone would use this encoding; it doesn't seem to make a lot of sense, since it wastes part of every 4-bit group (the values above 9 are never used).
Also, I think x86 only directly supports BCD adds and subs (you can convert to/from BCD via the FPU).
Is it possible that this comes from older machines or other architectures?
BCD arithmetic is useful for exact decimal calculations, which is often a requirement for financial applications, accountancy, etc. It also makes things like multiplying/dividing by powers of 10 easier. These days there are better alternatives.
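To illustrate both points, here is a minimal C sketch (my own example, not any particular library's API; the helper names `bcd_add_byte` and `bcd_mul10` are made up): it adds two packed-BCD bytes with a manual per-digit decimal adjust, similar in spirit to what DAA does on x86, and multiplies a packed-BCD value by 10 with a plain nibble shift.

```c
#include <stdint.h>
#include <stdio.h>

/* Add two packed-BCD bytes (two decimal digits each), propagating a
 * decimal carry per nibble -- the same fix-up x86's DAA performs. */
static uint8_t bcd_add_byte(uint8_t a, uint8_t b, int *carry)
{
    unsigned lo = (a & 0x0F) + (b & 0x0F);
    unsigned hi = (a >> 4) + (b >> 4);

    if (lo > 9) {            /* low digit overflowed the decimal range */
        lo -= 10;
        hi += 1;
    }
    if (hi > 9) {            /* high digit overflowed the decimal range */
        hi -= 10;
        *carry = 1;
    } else {
        *carry = 0;
    }
    return (uint8_t)((hi << 4) | lo);
}

/* Multiplying a packed-BCD value by 10 is just a 4-bit left shift:
 * every decimal digit moves up one position. */
static uint32_t bcd_mul10(uint32_t bcd)
{
    return bcd << 4;
}

int main(void)
{
    int carry;
    uint8_t sum = bcd_add_byte(0x38, 0x27, &carry);      /* 38 + 27 */
    printf("38 + 27 = %x%02x (packed BCD)\n", carry, sum);   /* 065 */

    uint32_t x = 0x0042;                                  /* decimal 42 */
    printf("42 * 10 = %x (packed BCD)\n", bcd_mul10(x));      /* 420 */
    return 0;
}
```

Every decimal digit is stored exactly, so there is no binary rounding to worry about, which is why the format appealed to financial and accounting software.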
There's a good Wikipedia article which discusses the pros and cons.