We are planning to develop an Azure Function that is triggered by a Service Bus message and writes its output to Blob Storage. The Service Bus message will contain an image URL; the function will resize the image to a predefined resolution and upload the result to Azure Blob Storage.
The target resolution is stored in a database, so the function needs to query the database to find out which resolution to use for the image in the incoming message. The resolution is effectively master data, configured per source of the input message.
Hitting the database on every function execution would be expensive. Is there any way to cache this data and reuse it without going to the database each time, for example with in-memory caching?
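Roughly, the skeleton we have in mind looks something like the following; the queue, container, and connection names are just placeholders:

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ResizeImageFunction
{
    [FunctionName("ResizeImage")]
    public static void Run(
        [ServiceBusTrigger("image-requests", Connection = "ServiceBusConnection")] string message,
        [Blob("resized-images/{rand-guid}.jpg", FileAccess.Write, Connection = "BlobConnection")] Stream output,
        ILogger log)
    {
        // 1. Parse the image URL (and its source) from the message.
        // 2. Look up the target resolution for that source -- ideally without
        //    hitting the database on every execution (the point of this question).
        // 3. Download the image, resize it, and write the result to 'output'.
    }
}
```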
You are free to use the usual approaches that you would use in other .NET applications:
You can cache it in memory. The easiest way is to declare a static dictionary and put the database values inside (use a ConcurrentDictionary if multiple executions may write to it). The cached values will be reused by all subsequent function executions that run on the same instance. If an instance gets recycled after sitting idle for a few minutes, or if the app scales out to an extra instance, you will have to read the database again.
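A minimal sketch of the static-dictionary approach; the database helper and the default resolution are hypothetical stand-ins:

```csharp
using System.Collections.Concurrent;

public static class ResolutionCache
{
    // Static state lives as long as the instance, so it is shared by every
    // execution on that instance and rebuilt only after a recycle or scale-out.
    private static readonly ConcurrentDictionary<string, (int Width, int Height)> Cache =
        new ConcurrentDictionary<string, (int Width, int Height)>();

    public static (int Width, int Height) GetResolution(string source) =>
        Cache.GetOrAdd(source, LoadResolutionFromDatabase); // database hit only on a miss

    // Hypothetical data-access helper -- replace with the real master-data query.
    private static (int Width, int Height) LoadResolutionFromDatabase(string source) =>
        (800, 600);
}
```

Inside the function body, `ResolutionCache.GetResolution(source)` then replaces the per-message database call.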
You can use a distributed cache, e.g. Redis, by calling its SDK from the function code. This might be a bit nicer, since you keep the stateless nature of Functions, but it might cost a bit more. Table Storage is a viable alternative to Redis, but with a more limited API.
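A minimal sketch of the Redis approach, assuming the StackExchange.Redis package and a "RedisConnection" app setting; the key format, the one-hour expiry, and the database helper are illustrative:

```csharp
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

public static class DistributedResolutionCache
{
    // Reuse one multiplexer per instance -- creating it per call is expensive.
    private static readonly Lazy<ConnectionMultiplexer> Connection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect(
                Environment.GetEnvironmentVariable("RedisConnection")));

    public static async Task<string> GetResolutionAsync(string source)
    {
        IDatabase cache = Connection.Value.GetDatabase();
        string key = $"resolution:{source}";

        RedisValue cached = await cache.StringGetAsync(key);
        if (cached.HasValue)
            return cached; // cache hit -- no database round trip

        string resolution = await LoadResolutionFromDatabaseAsync(source);      // hypothetical query
        await cache.StringSetAsync(key, resolution, TimeSpan.FromHours(1));     // let it refresh hourly
        return resolution;
    }

    // Hypothetical data-access helper -- replace with the real master-data query.
    private static Task<string> LoadResolutionFromDatabaseAsync(string source) =>
        Task.FromResult("800x600");
}
```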
There's no built-in "caching" feature in Azure Functions themselves that is ready to use without any extra code.