LCEL

What is LCEL?

LangChain Expression Language (LCEL) is a declarative way to compose chains from LangChain components.
LCEL is designed so that prototypes can be put into production with no code changes.
It scales from the simplest “prompt + LLM” chain to complex chains with hundreds of steps [1].

What is a Chain?

Imagine you have a set of different toys, each with its unique function. A Chain is like connecting these toys in a specific order to create a fun game. Each toy does its part, and together, they create something more complex and exciting than any single toy could do on its own.
For example, a common type of Chain in LangChain is the LLM Chain, which takes an input/prompt, passes it to an LLM for processing, and returns the output.

Why do we need LCEL?

LCEL makes it easy to build complex chains from basic components. It does this by providing:

  1. A unified interface
    • Every LCEL object implements the Runnable interface.
    • This defines a common set of invocation methods (invoke, batch, stream, …).
    • These methods make it possible for chains of LCEL objects to automatically support the same invocations, i.e. every chain of LCEL objects is itself an LCEL object.
  2. Composition primitives
    • LCEL provides a number of primitives that make it easy to compose chains, parallelize components, add fallbacks, dynamically configure chain internals, and more, as sketched below.
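
To make this concrete, here is a minimal sketch of the unified interface and of parallel composition, assuming langchain-core is installed; the names to_upper, word_count, analysis, and resilient are illustrative, not part of LCEL itself.

from langchain_core.runnables import RunnableLambda, RunnableParallel

# Wrap ordinary Python callables as Runnables for illustration.
to_upper = RunnableLambda(lambda text: text.upper())
word_count = RunnableLambda(lambda text: len(text.split()))

# RunnableParallel runs both branches on the same input and returns a dict.
analysis = RunnableParallel(upper=to_upper, words=word_count)

analysis.invoke("lcel composes runnables")
# -> {'upper': 'LCEL COMPOSES RUNNABLES', 'words': 3}

# Every Runnable also exposes batch() and stream(), and fallbacks are one call away.
analysis.batch(["first input", "second input"])
resilient = to_upper.with_fallbacks([RunnableLambda(lambda text: text)])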

To better understand the value of LCEL, it’s helpful to see it in action and think about how we might recreate similar functionality without it [2].

Without LCEL

from typing import List

import openai


prompt_template = "Tell me a short joke about {topic}"
client = openai.OpenAI()

def call_chat_model(messages: List[dict]) -> str:
    # Send the messages to the Chat Completions API and return the reply text.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    return response.choices[0].message.content

def invoke_chain(topic: str) -> str:
    # Format the prompt, wrap it as a user message, and call the model.
    prompt_value = prompt_template.format(topic=topic)
    messages = [{"role": "user", "content": prompt_value}]
    return call_chat_model(messages)

invoke_chain("programming")

With LCEL

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI


prompt = ChatPromptTemplate.from_template(
    "Tell me a short joke about {topic}"
)
model = ChatOpenAI(model="gpt-3.5-turbo")
chain = (
    {"topic": RunnablePassthrough()} 
    | prompt
    | model
    | StrOutputParser()
)

chain.invoke("programming")

Use cases of LCEL

LCEL can be used anywhere generic Chains are used. It appears in a variety of use cases to enhance the capabilities of Large Language Models (LLMs) and to streamline complex tasks. Here are a few of the use cases for LCEL:

  1. SQL Operations: LCEL can be used to interact with SQL databases, allowing for the execution of SQL queries and operations using natural language prompts[3].

  2. Chatbots: LCEL facilitates the creation of chatbots that can handle complex interactions by chaining together different models and prompts[3:1].

  3. Retrieval-Augmented Generation (RAG): It can be used for RAG tasks, which combine LLMs with document retrieval so that responses are grounded in external data (a minimal sketch appears after this list)[3:2][4].

  4. Question Answering and Chat over Documents: LCEL is useful for building systems that can answer questions or engage in chat based on the content of documents[3:3].

  5. Tool Use: It can be employed to create chains that interact with various tools, such as APIs or tabular data, for tasks like data extraction and summarization[3:4].

  6. Agent Simulations: LCEL can be used to simulate agents that perform specific tasks autonomously, enhancing the interactivity and functionality of applications[3:5].

  7. Autonomous Agents: It supports the development of autonomous agents that can perform tasks without direct human intervention, relying on the chaining of LLMs to process and act on information[3:6].

  8. Extraction: LCEL can be used to extract structured information from text, which is essential for working with APIs and databases that deal with structured data[3:7].

  9. Interacting with APIs: It is particularly useful for creating chains that can interact with APIs to retrieve or manipulate data[3:8].

  10. Tabular Question Answering: LCEL can be leveraged for querying and interpreting data stored in tables, such as CSVs, Excel sheets, or SQL tables[3:9].

  11. Summarization: It can be used to summarize long documents, making it easier to digest large amounts of text[3:10].

These use cases demonstrate the versatility of LCEL in handling a wide range of tasks that involve the use of LLMs, from data manipulation to interactive applications. The ability to chain together different components allows for the creation of sophisticated systems that can perform complex operations with ease[3:11].
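
As an illustration of the RAG use case above, here is a minimal sketch of a retrieval-style LCEL chain. The fake_retriever function is a hypothetical stand-in for a real vector-store retriever, and the prompt wording and model choice are assumptions.

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from langchain_openai import ChatOpenAI

def fake_retriever(question: str) -> str:
    # Hypothetical stand-in for a vector-store retriever.
    return "LCEL composes Runnables with the | operator."

prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)

rag_chain = (
    {"context": RunnableLambda(fake_retriever), "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI(model="gpt-3.5-turbo")
    | StrOutputParser()
)

rag_chain.invoke("How does LCEL compose components?")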

What are the Pros and Cons of LCEL?

Below, we explore the pros and cons of using LCEL, drawing on the referenced sources.

Pros

  1. Declarative Approach: LCEL enables a declarative style for chain composition, which simplifies operations like streaming, batch processing, and handling asynchronous tasks[5]. This approach allows developers to focus on what they want to achieve rather than how to implement it, leading to clearer and more maintainable code.

  2. Modularity and Flexibility: The modular nature of LCEL facilitates easy swapping of components, making it adaptable to various use cases. This flexibility is further enhanced by the ability to easily modify prompts to suit specific requirements[5:1].

  3. Support for Advanced Features: LCEL comes with strong support for advanced features such as streaming, parallel execution, and asynchronous operations. These features enable rapid development of chains and efficient handling of complex workloads (a brief async sketch follows this list)[6][7].

  4. Integration with LangChain Tools: LCEL integrates seamlessly with other LangChain tools like LangSmith and LangServe, providing a cohesive environment for developing sophisticated language models and applications[6:1].
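
As a small illustration of point 3, the chain from the earlier “With LCEL” example could also be driven asynchronously. This is a sketch, assuming that chain is still in scope and that no event loop is already running:

import asyncio

async def main() -> None:
    # Async counterparts (ainvoke, abatch, astream) come for free on any Runnable.
    joke = await chain.ainvoke("databases")
    print(joke)
    # Stream the response chunk by chunk without blocking the event loop.
    async for chunk in chain.astream("databases"):
        print(chunk, end="", flush=True)

asyncio.run(main())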

Cons

  1. Learning Curve: The syntax of LCEL, while powerful, can feel confusing and contrary to the Zen of Python to some developers. Learning this new or uncommon syntax requires additional effort, which might be a barrier for those not familiar with similar programming paradigms[6:2][7:1].

  2. Added Abstraction Layer: LCEL introduces an additional layer of abstraction on top of Python, which might not be appealing to all developers. This abstraction can sometimes obscure the underlying logic, making debugging and optimization more challenging[7:2].

  3. Potential for Increased Complexity: While LCEL is designed to simplify chain composition, the very flexibility and modularity it offers can lead to increased complexity in chain management, especially for large and intricate chains[5:2].

  4. Dependency on LangChain Ecosystem: The effectiveness and efficiency of LCEL are closely tied to the LangChain ecosystem. Developers not fully invested in this ecosystem might find it less beneficial compared to more general-purpose programming approaches[5:3][6:3].

Alternatives to LCEL

If you are invested in LangChain, there is no real alternative to LCEL. Below are alternatives to LangChain itself.

  1. FlowiseAI: FlowiseAI provides a drag-and-drop user interface for constructing flows with Large Language Models (LLMs) and developing LangChain applications. It is particularly suitable for developers and organizations that want to build LLM-powered applications without writing much code[8][9].

  2. LlamaIndex: LlamaIndex is another alternative that is ideal for building focused search and retrieval experiences with minimal complexity, whereas LangChain is better suited to complex, interactive applications that require more intricate chaining of tools[8:1][10].

  3. Auto-GPT: Auto-GPT aims to transform GPT-4 into a self-sufficient conversational AI, with agent deployment as a secondary goal. It contrasts with LangChain, which is more of a toolbox for developing specialized applications[11].

These alternatives offer different approaches and features that might be more aligned with specific project requirements or personal preferences. They can be considered by developers looking for simpler solutions or different functionalities than those provided by LCEL within the LangChain framework[8:2][9:1][11:1][10:1].


  1. https://python.langchain.com/docs/expression_language/

  2. https://python.langchain.com/docs/expression_language/why

  3. https://js.langchain.com/docs/use_cases

  4. https://www.datacamp.com/tutorial/prompt-engineering-with-langchain

  5. https://www.comet.com/site/blog/lcel-a-guide-to-langchain-expression-language/

  6. https://www.pinecone.io/learn/series/langchain/langchain-expression-language/

  7. https://eightify.app/summary/artificial-intelligence-and-programming/understanding-langchain-expression-language-lcel

  8. https://blog.apify.com/langchain-alternatives/

  9. https://www.e2enetworks.com/blog/top-5-open-source-langchain-alternatives-to-use-in-2024

  10. https://stackoverflow.com/questions/76990736/differences-between-langchain-llamaindex

  11. https://indiaai.gov.in/article/seven-interesting-langchain-alternatives-for-building-ai-agents

Thoughts 🤔 by Soumendra Kumar Sahoo is licensed under CC BY 4.0