It's known that CF indicates an unsigned carry out and OF indicates signed overflow. So how does an assembly program differentiate between unsigned and signed data, since the data is only a sequence of bits? (Through additional memory storage for type information, through positional information, or something else?) And could these two flags be used interchangeably?
The distinction lies in which instructions are used to manipulate the data, not in the data itself. Modern computers (since circa 1970) represent integers in two's complement, in which addition and subtraction work exactly the same on signed and unsigned numbers.
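A minimal NASM-style sketch of that point (the values are illustrative): one ADD instruction serves both interpretations, and only the reader's intent differs.

```
mov  eax, 0xFFFFFFFE   ; 4294967294 unsigned, or -2 signed
add  eax, 3            ; the same ADD serves both views
; EAX = 0x00000001 either way:
;   unsigned: 4294967294 + 3 wraps modulo 2^32 to 1
;   signed:   -2 + 3 = 1
```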
The difference in representation is the interpretation given to the most significant bit (also called the sign bit). For unsigned numbers the most significant bit is set when the number is in the upper half of the (entirely nonnegative) range. For signed numbers the most significant bit is set exactly when the number is negative, i.e., in the lower half of the whole range.
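For instance, here is how a few 8-bit patterns read under each interpretation:

```
0x7F = 0111 1111 : MSB clear -> 127 unsigned, +127 signed
0x80 = 1000 0000 : MSB set   -> 128 unsigned, -128 signed
0xFF = 1111 1111 : MSB set   -> 255 unsigned,   -1 signed
```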
Different instructions may apply different interpretations to the same bit pattern. For example, most full-size machines have both signed and unsigned multiply instructions, and machines with a 'set less than' instruction may offer both signed and unsigned flavors, as in the sketch below.
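On x86, for instance, MUL is the unsigned multiply and IMUL the signed one, while SETB and SETL are the unsigned and signed 'set less than' conditions. A minimal NASM-style sketch (values illustrative):

```
mov  al, 0xFF      ; 255 unsigned, or -1 signed
mov  bl, 2
mul  bl            ; unsigned multiply: AX = 255 * 2 = 510 (0x01FE)

mov  al, 0xFF
imul bl            ; signed multiply:   AX = -1 * 2  = -2  (0xFFFE)

cmp  al, bl
setb cl            ; CL = 1 if AL < BL as unsigned (consults CF)
setl cl            ; CL = 1 if AL < BL as signed   (consults SF and OF)
```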
The OF (overflow flag) tells whether a carry flipped the most significant bit of the result, so that its sign differs from the signs of both arguments. If the numbers are interpreted as unsigned, the overflow flag is irrelevant; but if they are interpreted as signed, a set OF means, for example, that two large positive numbers were added and the result came out negative.
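A minimal NASM-style sketch of signed overflow (values illustrative, label name hypothetical); JO is x86's jump-on-overflow instruction:

```
mov  eax, 0x7FFFFFFF        ; +2147483647, the largest signed 32-bit value
add  eax, 1                 ; EAX = 0x80000000 = -2147483648 as signed
; OF = 1 (two positives summed to a negative); CF = 0 (no unsigned carry out)
jo   handle_signed_overflow ; taken: JO branches when OF is set
```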
The CF (carry flag) tells whether a bit was carried out of the word entirely (e.g., into the nonexistent 33rd bit of a 32-bit word or 65th bit of a 64-bit word). If the numbers are interpreted as unsigned, a set CF means the addition overflowed and the result is too large to fit in a machine word; the overflow flag is irrelevant.
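The complementary sketch for unsigned overflow (again NASM-style, label name hypothetical); JC is x86's jump-on-carry instruction:

```
mov  eax, 0xFFFFFFFF          ; 4294967295, the largest unsigned 32-bit value
add  eax, 1                   ; the carry leaves the word; EAX = 0
; CF = 1 (unsigned overflow); OF = 0 (as signed this is just -1 + 1 = 0)
jc   handle_unsigned_overflow ; taken: JC branches when CF is set
```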
The answer to your question is that assembly code has several ways of distinguishing signed from unsigned data:

- By the choice of instruction where signed and unsigned operations differ (e.g., multiply and 'set less than', as above).
- By which flag the code tests after arithmetic: CF for the unsigned interpretation, OF for the signed one.
- By the choice of conditional branch after a comparison, as in the sketch below.

So no, the two flags are not interchangeable: the same ALU sets both on every operation, and the program consults whichever flag matches the interpretation it intends.
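For the comparison case, x86 pairs every CMP with two families of conditional jumps: JB/JA test the unsigned 'below/above' conditions via CF, while JL/JG test the signed 'less/greater' conditions via SF and OF. A minimal NASM-style sketch (values and labels illustrative):

```
mov  eax, 0xFFFFFFFF   ; 4294967295 unsigned, or -1 signed
cmp  eax, 1            ; one compare; the jump encodes the signedness
jb   was_below         ; unsigned: 4294967295 < 1 is false -- not taken
jl   was_less          ; signed:   -1 < 1 is true          -- taken
```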