I'm working with the Eclipse IDE (version 3.4.2) on a Mac and I have run into the following issue.
When comparing strings using the equals() or equalsIgnoreCase() methods I get false even when the strings are equal. For example, the code below treats the following condition as false, even when values[0] = "debug_mode":
if (values[0].equalsIgnoreCase("debug_mode"))
    debug_mode = true;
which is part of the following loop:
String value = dis.readLine();
String values[] = value.trim().split("=");
if (values.length >= 2)
{
    Config.prnt_dbg_msg(values[0] + "\t" + values[1]);
    if (values[0].equalsIgnoreCase("debug_mode"))
        debug_mode = isTrue(values[1]);
    if (values[0].equalsIgnoreCase("debug_query_parsing"))
        debug_query_parsing = isTrue(values[1]);
    if (values[0].equalsIgnoreCase("username"))
        Connection_Manager.alterAccessParameters(values[1], null, null);
    if (values[0].equalsIgnoreCase("password"))
        Connection_Manager.alterAccessParameters(null, values[1], null);
    if (values[0].equalsIgnoreCase("database"))
        Connection_Manager.alterAccessParameters(null, null, values[1]);
    if (values[0].equalsIgnoreCase("allow_duplicate_entries"))
        allow_duplicate_entries = isTrue(values[1]);
}
I tried to use values[0].equals("debug_mode") and got the same result.
Does anyone have any idea why?
That would be very strange indeed :) Can you change the above code to this:
if ("debug_mode".equalsIgnoreCase("debug_mode"))
debug_mode = true;
confirm it works fine, and then double check why your values[0] is not "debug_mode".
Here's what comes to my mind right now as a list of things to check (see the sketch after this list):
- is values[0].length() == "debug_mode".length()?
- if you step through the characters of values[0], does .equals() hold between each character and the respective character of the "debug_mode" string?
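A minimal sketch of such a check, assuming it is dropped into the question's loop where values[0] is in scope (the variable names are taken from the question's code, nothing else is assumed):

System.out.println(values[0].length() + " vs " + "debug_mode".length());
for (char c : values[0].toCharArray()) {
    // print the numeric code of every character; stray whitespace or
    // non-printing characters that the eye misses will show up here
    System.out.println((int) c + " -> '" + c + "'");
}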
To clarify, the problem is actually using DataInputStream.readLine.
From the javadoc (http://download.oracle.com/javase/1.6.0/docs/api/java/io/DataInputStream.html):
readLine()
Deprecated. This method does not properly convert bytes to characters. ...
It actually has to do with Unicode in a subtle way: when you do writeChar you actually write two bytes, 0 and 97, the big-endian Unicode encoding of the letter a.
Here's a self-contained snippet that shows the behavior:
import java.io.*;
import java.util.*;

public class B {
    public static void main(String[] args) throws Exception {
        String os = "abc";

        // what the big-endian UTF-16 bytes of "abc" look like
        System.out.println("---- unicode, big-endian");
        for (byte b : os.getBytes("UTF-16BE")) {
            System.out.println(b);
        }

        // write the same characters with DataOutputStream.writeChar:
        // each char becomes two bytes, high byte first
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        DataOutputStream dos = new DataOutputStream(baos);
        for (char c : os.toCharArray()) {
            dos.writeChar(c);
        }
        byte[] ba = baos.toByteArray();
        System.out.println("---- ba");
        for (byte b : ba) {
            System.out.println(b);
        }

        // read it back with the deprecated readLine, which treats every
        // single byte as a character instead of decoding pairs of bytes
        ByteArrayInputStream bais = new ByteArrayInputStream(ba);
        DataInputStream dis = new DataInputStream(bais);
        System.out.println("---- dis");
        String s = dis.readLine();
        System.out.println(s);
        System.out.println("String length is " + s.length()
                + ", but you would expect " + os.length()
                + ", as that is what you see printed...");
    }
}
Moral of the story - don't use deprecated APIs... Also, whitespace is the silent killer: http://www.codinghorror.com/blog/2009/11/whitespace-the-silent-killer.html
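For reference, a minimal sketch of the non-deprecated route for the snippet above: wrap the stream in an InputStreamReader that knows the charset the bytes were actually written in (here "UTF-16BE", since writeChar emits big-endian UTF-16; a plain text config file would use its own charset instead) and read lines through a BufferedReader. The class name B2 is just for this illustration.

import java.io.*;

public class B2 {
    public static void main(String[] args) throws Exception {
        // same bytes as above: 'a','b','c' as big-endian UTF-16
        byte[] ba = "abc".getBytes("UTF-16BE");

        // decode through a Reader with an explicit charset,
        // instead of DataInputStream.readLine()
        BufferedReader br = new BufferedReader(
                new InputStreamReader(new ByteArrayInputStream(ba), "UTF-16BE"));
        String s = br.readLine();
        System.out.println(s + ", length " + s.length()); // prints: abc, length 3
    }
}

With the reader doing the byte-to-character conversion, the string comes back with length 3 and equalsIgnoreCase("abc") behaves as expected.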