Does the Inference API support batch/asynchronous processing?

Hi, I have a bunch of inputs that I'd like to process with the Inference API. Does the Lambda Labs endpoint support the batch processing that's described in the OpenAI API docs: https://platform.openai.com/docs/guides/batch?lang=python

I'd like to use the chat completions endpoint to submit a list of independent prompts/messages and receive a separate output for each one, with each prompt run through the model independently.
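
For reference, this is roughly what I have in mind (just a sketch assuming the endpoint mirrors OpenAI's batch workflow; the base URL and model name below are guesses on my part, and I don't know whether the batch endpoints are actually exposed):

```python
# Sketch of the OpenAI-style batch workflow, pointed at the Lambda endpoint.
# Base URL and model name are placeholders/assumptions.
import json
from openai import OpenAI

client = OpenAI(
    api_key="<LAMBDA_API_KEY>",
    base_url="https://api.lambdalabs.com/v1",  # assumed base URL
)

prompts = ["First independent prompt.", "Second independent prompt."]

# One JSONL line per independent chat completion request.
with open("batch_input.jsonl", "w") as f:
    for i, prompt in enumerate(prompts):
        f.write(json.dumps({
            "custom_id": f"request-{i}",
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": "<model-name>",  # placeholder
                "messages": [{"role": "user", "content": prompt}],
            },
        }) + "\n")

# Upload the file and create the batch, as in the OpenAI batch docs.
batch_file = client.files.create(file=open("batch_input.jsonl", "rb"), purpose="batch")
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)
print(batch.id, batch.status)
```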