SELECT Value1 INTO lValue
FROM Table1
WHERE Field1 = lTempValue;
This works fine when a matching row exists. But if there is no match, I receive an error:
ORA-01403: no data found
Ideally, that's fine with me, because I'm going to check that value next to see if it's above 0 and, if it is, use it in an insert query. I don't want to check for the value and then essentially run the same query again to retrieve it; I want to do it in one query if possible, but I can't figure out how that is done.
If there's a value, then I want that value to go into lValue. If there is no value, then I want 0 to go into lValue. Anyone got any ideas? I've only done a quick Google search, but it came up dry. Figured I'd post this while looking. Thanks for the help.
Normally, you'd simply catch the exception:
BEGIN
  SELECT value1
    INTO lValue
    FROM table1
   WHERE field1 = lTempValue;
EXCEPTION
  WHEN no_data_found THEN
    lValue := 0;
END;
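For context, here is a sketch of how that nested block might sit inside the larger piece of code you described, with the "only insert when the value is above 0" step afterwards. The table some_target, its column some_column, and the initial value for lTempValue are placeholders, not anything from your schema:

DECLARE
  lTempValue  table1.field1%TYPE;   -- assume this gets populated elsewhere
  lValue      table1.value1%TYPE;
BEGIN
  -- Nested block: default lValue to 0 when no row matches
  BEGIN
    SELECT value1
      INTO lValue
      FROM table1
     WHERE field1 = lTempValue;
  EXCEPTION
    WHEN no_data_found THEN
      lValue := 0;
  END;

  -- Only do the insert when a positive value was actually found
  IF lValue > 0 THEN
    INSERT INTO some_target (some_column)
    VALUES (lValue);
  END IF;
END;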
You can write less code by using NVL and an aggregate function (either MIN or MAX), but that tends to be a bit less obvious (note, for example, that those answers had to be revised a couple of times). And it requires whoever comes after you to pause for a moment to understand what you are doing (and whether you are doing it correctly or not). A simple nested PL/SQL block is pretty common and pretty self-explanatory.
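For completeness, the aggregate version being alluded to would look something like this: an aggregate query always returns exactly one row, so no_data_found never fires; MAX over zero rows is NULL, and NVL turns that NULL into 0.

SELECT NVL(MAX(value1), 0)
  INTO lValue
  FROM table1
 WHERE field1 = lTempValue;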
More than that, however, it doesn't hide bugs due to duplicate rows. If you happen to get two rows in table1 where field1 is lTempValue, catching just the no_data_found exception allows the unexpected too_many_rows exception to propagate up to the caller. Since you don't expect to have multiple rows, that is exactly the behavior you want. Using aggregate functions hides the fact that the underlying data has problems, causing you to return potentially incorrect results and making it impossible to detect that there is a problem. I would always rather get an error as soon as something starts causing duplicate rows to appear, allowing me to fix the problem before it gets out of hand, than find out years later that we've got millions of duplicate rows, that the code has been occasionally returning incorrect results, and that we have a huge data cleansing effort ahead of us after addressing the root cause.