I'm sending a text file. The client breaks the text into packets of at most 512 bytes, but some packets carry less than the maximum size, so on the server side I call malloc() for every packet I receive in order to rebuild the string. Is this bad practice? Is it better to keep one working buffer that can fit the maximum length and keep iterating, copying into it and overwriting its contents?
Okay @n.m., here is the code. This if block sits inside a for(;;) loop woken up by select():
if (nbytes == 2) {                            /* the 2-byte length prefix has arrived */
    packet_size = unpack_short(short_buf);
    printf("packet size is %d\n", packet_size);
    receive_packet(i, packet_size, &buffer);  /* i is the socket reported ready by select() */
    printf("packet=%s\n", buffer);
    free(buffer);
}
// and here is the receive_packet() function
int receive_packet(int fd, int p_len, char **string) {
    *string = malloc(p_len - 2 + 1);   /* 2 bytes were the length prefix; +1 for the '\0' */
    if (*string == NULL)
        return -1;
    char *start = *string;
    int temp;
    int total = 0;
    int remaining = p_len - 2;
    while (remaining > 0) {
        temp = recv(fd, *string, remaining, 0);
        if (temp <= 0) {               /* recv failed or the peer closed the connection */
            free(start);
            *string = NULL;
            return -1;
        }
        total += temp;
        remaining = (p_len - 2) - total;
        (*string) += temp;
    }
    **string = '\0';                   /* terminate so the caller can print it with %s */
    *string = start;
    return 0;
}
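For clarity, this is roughly what I mean by the second option: one working buffer sized for the biggest packet, reused on every iteration of the same for(;;) loop. It is just a sketch; MAX_PACKET and the inline receive loop are illustrative and assume 512 is the maximum packet size.

#define MAX_PACKET 512

char workbuf[MAX_PACKET + 1];          /* +1 for the terminating '\0'; reused for every packet */

if (nbytes == 2) {
    packet_size = unpack_short(short_buf);
    int remaining = packet_size - 2;
    if (remaining < 0 || remaining > MAX_PACKET)
        continue;                      /* protocol error: advertised length is out of range */
    int total = 0;
    while (remaining > 0) {
        int n = recv(i, workbuf + total, remaining, 0);
        if (n <= 0)
            break;                     /* recv failed or the peer closed the connection */
        total += n;
        remaining -= n;
    }
    workbuf[total] = '\0';
    printf("packet=%s\n", workbuf);    /* no malloc/free per packet */
}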
In your example, your function already contains a syscall, so the relative cost of malloc/free will be virtually unmeasurable. On my system, a malloc/free "round trip" averages about 300 cycles, and the cheapest syscalls (getting the current time, pid, etc.) cost at least 2500 cycles. Expect recv to easily cost 10 times that much, in which case the cost of allocating/freeing memory will be at most about 1% of the total cost of this operation.
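If you want to check those numbers on your own machine, a crude micro-benchmark along these lines gives a ballpark. This is only a sketch: results vary a lot by platform, and on Linux clock_gettime() is often serviced from the vDSO rather than via a full syscall.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    enum { ITERS = 1000000 };
    struct timespec t0, t1, ts;
    long long ns;

    /* malloc/free round trip; touch the block so the pair isn't optimized away */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++) {
        char *p = malloc(512);
        ((volatile char *)p)[0] = 0;
        free(p);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    ns = (t1.tv_sec - t0.tv_sec) * 1000000000LL + (t1.tv_nsec - t0.tv_nsec);
    printf("malloc/free:   %lld ns per call pair\n", ns / ITERS);

    /* cheapest "get the current time" call, for comparison */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++)
        clock_gettime(CLOCK_REALTIME, &ts);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    ns = (t1.tv_sec - t0.tv_sec) * 1000000000LL + (t1.tv_nsec - t0.tv_nsec);
    printf("clock_gettime: %lld ns per call\n", ns / ITERS);
    return 0;
}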
Of course exact timings will vary, but the rough orders of magnitude should be fairly invariant across systems. I would not even begin to consider removing malloc/free as an optimization except in functions that are purely user-space. Where it's probably actually more valuable to go without dynamic allocation is in operations that should not have failure cases: there, the gain is that you simplify and harden your code by not having to worry about what to do when malloc fails.
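To illustrate that last point, a variant of receive_packet() that takes a caller-supplied buffer has no allocation-failure path left to handle. This is only a sketch; receive_into() and its signature are my own, not part of your code.

#include <stddef.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Hypothetical no-allocation variant: the caller owns the buffer, so the only
   remaining failure modes are protocol errors and recv() itself failing. */
int receive_into(int fd, int p_len, char *buf, size_t buf_size) {
    int remaining = p_len - 2;                 /* 2 bytes were already consumed as the length */
    if (remaining < 0 || (size_t)remaining >= buf_size)
        return -1;                             /* advertised length doesn't fit the buffer */
    int total = 0;
    while (remaining > 0) {
        ssize_t n = recv(fd, buf + total, remaining, 0);
        if (n <= 0)
            return -1;                         /* recv failed or the peer closed the connection */
        total += n;
        remaining -= n;
    }
    buf[total] = '\0';                         /* safe: length was checked against buf_size */
    return total;
}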