I have the code below for error checking using Hamming codes. I went through the algorithm on Wikipedia and also understood how it works from the thread How does the Hamming code work?
But the code below uses some kind of sum of the parity bits to detect which bit is in error.
Can someone please explain how exactly that sum is used to detect the error bit?
code:
#include <stdio.h>

int main(void)
{
    int i, a[4], c[3], r[7], clk[3], sum = 0;

    printf("Enter data bits\n");
    for (i = 3; i >= 0; i--)
        scanf("%d", &a[i]);
    printf("\n");

    /* each check bit is a mod-2 sum (parity) of three of the data bits */
    c[0] = (a[0] + a[1] + a[2]) % 2;
    c[1] = (a[1] + a[2] + a[3]) % 2;
    c[2] = (a[1] + a[0] + a[3]) % 2;

    printf("Data bits after Hamming code:\n");
    for (i = 3; i >= 0; i--)
        printf("%d", a[i]);
    for (i = 2; i >= 0; i--)
        printf("%d", c[i]);
    printf("\n");

    printf("Enter received code\n");
    for (i = 0; i < 7; i++)
        scanf("%d", &r[i]);

    /* recompute each parity over the received bits:
       a result of 1 means that parity equation failed */
    clk[0] = (r[3] + r[1] + r[2] + r[6]) % 2;
    clk[1] = (r[0] + r[2] + r[1] + r[5]) % 2;
    clk[2] = (r[0] + r[2] + r[3] + r[4]) % 2;

    /* read the three check results as one binary number */
    sum = 4 * clk[2] + 2 * clk[1] + 1 * clk[0];

    if (sum == 0)
        printf("\nYou have received the correct code\n");
    if (sum == 1) {
        printf("Error in check bit 2\n");
        printf("The correct code is ");
        r[6] = (r[6] + 1) % 2;   /* flip the erroneous bit */
        for (i = 0; i < 7; i++)
            printf("%d", r[i]);
    }
    if (sum == 2) {
        printf("Error in check bit 1\n");
        printf("The correct code is ");
        r[5] = (r[5] + 1) % 2;
        for (i = 0; i < 7; i++)
            printf("%d", r[i]);
    }
    if (sum == 3) {
        printf("\nError in data bit 1\n");
        printf("The correct code is ");
        r[1] = (r[1] + 1) % 2;
        for (i = 0; i < 7; i++)
            printf("%d", r[i]);
    }
    if (sum == 4) {
        printf("\nError in check bit 0\n");
        printf("The correct code is ");
        r[4] = (r[4] + 1) % 2;
        for (i = 0; i < 7; i++)
            printf("%d", r[i]);
    }
    if (sum == 5) {
        printf("\nError in data bit 3\n");
        printf("The correct code is ");
        r[3] = (r[3] + 1) % 2;
        for (i = 0; i < 7; i++)
            printf("%d", r[i]);
    }
    if (sum == 6) {
        printf("Error in data bit 0\n");
        printf("The correct code is ");
        r[0] = (r[0] + 1) % 2;
        for (i = 0; i < 7; i++)   /* fixed: a stray ';' here made it print r[7], out of bounds */
            printf("%d", r[i]);
    }
    if (sum == 7) {
        printf("Error in data bit 2\n");
        printf("The correct code is ");
        r[2] = (r[2] + 1) % 2;
        for (i = 0; i < 7; i++)
            printf("%d", r[i]);
    }
    return 0;
}
Here is an alternative way of thinking about the Hamming code in particular and linear codes in general. One way to encode the Hamming code is to pass the original data through and then to append a checksum to it. That checksum is a linear function of the original data (calculated with mod 2 arithmetic).
When you receive the data you could calculate the checksum from it and add it, mod 2, to the received checksum. If the result is zero then the two checksums were identical and you might as well accept the data.
If the result is not zero, you have some linear function of the transmitted data and the pattern of errors that corrupted it. When you added the two checksums you added together a linear function of the original data and a linear function of (the original data with the error pattern added into it mod 2). Since this is mod 2 addition, when you added the two checksums the two contributions from the original data cancelled each other and you ended up with something which depends only on the error pattern, and not at all on the data encoded. This result is called the syndrome (or at least is equivalent to the syndrome).
Therefore one way to find the error pattern is to compute the syndrome for every possible error pattern (or at least for the error patterns you care about) and store the results in a table. For a Hamming code you will typically consider all of the single-bit error patterns. This is syndrome decoding.
So when you receive the data, you compute the syndrome (sum of expected and received checksums). If it is zero, all is well. If it is not, you look it up in the syndrome table and, if it is there, you add the resulting error pattern to the received data to correct the error. If it is not there, you have detected something other than a single-bit error, probably a double-bit error.
One reason for going through this in detail is to show that you could use the same idea (assuming you can create big enough tables) to correct more complicated error patterns, or a different choice of errors: if you know that some single-bit errors are very unlikely, leave them out of the table, and if some double-bit errors are likely, put those in instead, if there is space for them.
For more powerful error codes the number of correctable errors becomes unmanageable and you have to use cleverer ideas that take advantage of the structure of the code.