
IPO or No-Go? An AI Agent’s Due Diligence

AI agents are gaining momentum, but how do they differ from traditional chatbots? In this article, we don’t just explain the difference – we build an AI agent that can analyze stocks and assess a company’s IPO readiness. Sorry, human analysts, this one’s for the AI-powered stock researchers!

An increasingly popular term in AI is “AI Agent” – but what exactly does it mean? Just like human workers, AI agents analyze tasks, evaluate results, and improve over time. They are software systems powered by AI that can independently execute tasks, assess their output, and refine their results until a defined objective is met. Unlike traditional automation, AI agents can make iterative improvements, adjusting their approach based on feedback or changing inputs.
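In pseudocode, this loop of planning, executing, evaluating, and refining looks roughly like the following. The `plan`, `execute`, and `is_done` helpers here are hypothetical stand-ins for the LLM-backed components we build later in the article:

```python
# A minimal sketch of the agent loop: plan, execute, evaluate, refine.
# The helpers below are hypothetical stand-ins for LLM-backed components.

def plan(objective: str) -> list[str]:
    # A real planner would ask an LLM to break the objective into tasks.
    return [f"research: {objective}", f"summarize findings for: {objective}"]

def execute(task: str) -> str:
    # A real executor would run a tool-using agent on the task.
    return f"result of '{task}'"

def is_done(results: list[str], objective: str) -> bool:
    # A real agent would ask an LLM whether the objective is met.
    return len(results) >= 2

def run_agent(objective: str) -> list[str]:
    tasks = plan(objective)
    results: list[str] = []
    while tasks and not is_done(results, objective):
        task = tasks.pop(0)            # take the next task
        results.append(execute(task))  # execute it and record the outcome
        # a real agent would re-plan here based on the latest result
    return results

print(run_agent("assess IPO readiness"))
```

The rest of this article replaces each placeholder with an LLM-powered component and wires them together as a graph.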

In this blog, we create an AI agent that can decide the IPO readiness of a company. The architecture of the agent is heavily inspired by the Plan-and-Solve paper as well as the BabyAGI project. You will learn how to create a “plan-and-execute” style agent and use it to evaluate a company’s IPO readiness relative to competitors, based on financial statements.

Let’s take a look at the overall architecture. When a user makes a request, it first goes to a planning component that analyzes the request and breaks it down into a detailed task list. These tasks are then passed to a Single-Task Agent that executes them one at a time in a loop. After each task is completed, the system updates its state with the results, which triggers two actions: responding to the user and evaluating if more tasks are needed through a replanning component. This creates a recursive workflow where the system can continuously adapt and generate new tasks until the user’s request is fully addressed, similar to how a human might break down and solve complex problems.

Image from LangGraph

Let’s first install the packages from LangGraph. We also need to set API keys for OpenAI (the LLM we will use) and Tavily (the search tool we will use).

%%capture --no-stderr
%pip install --quiet -U langgraph langchain-community langchain-openai tavily-python
import getpass
import os
def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")

_set_env("OPENAI_API_KEY")
_set_env("TAVILY_API_KEY")

from langchain_community.tools.tavily_search import TavilySearchResults

tools = [TavilySearchResults(max_results=3)]

Next, we define the execution agent that will carry out individual tasks:

from langchain import hub
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

# Get the prompt to use - you can modify this!
prompt = hub.pull("chets/react-agent-executor")
prompt.pretty_print()

# Choose the LLM that will drive the agent
llm = ChatOpenAI(model="gpt-4-turbo-preview")
agent_executor = create_react_agent(llm, tools, state_modifier=prompt)

To track the agent’s state, we need three key components. We’ll store the current plan as a list of strings. For keeping track of what’s already been done, we’ll use a list of tuples where each tuple pairs a completed step with its corresponding result. Lastly, we’ll maintain both the original user input and the final response as part of the state to provide context and store the outcome.

import operator
from typing import Annotated, List, Tuple
from typing_extensions import TypedDict


class PlanExecute(TypedDict):
    input: str
    plan: List[str]
    past_steps: Annotated[List[Tuple], operator.add]
    response: str

Next, let’s create the planning step. We use function calling to generate a structured plan, defined as a Pydantic v2 model via LangChain.

from pydantic import BaseModel, Field


class Plan(BaseModel):
    """Plan to follow in future"""

    steps: List[str] = Field(
        description="different steps to follow, should be in sorted order"
    )


from langchain_core.prompts import ChatPromptTemplate

planner_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            """For the given objective, come up with a simple step by step plan. \
This plan should involve individual tasks, that if executed correctly will yield the correct answer. Do not add any superfluous steps. \
The result of the final step should be the final answer. Make sure that each step has all the information needed - do not skip steps.""",
        ),
        ("placeholder", "{messages}"),
    ]
)
planner = planner_prompt | ChatOpenAI(
    model="gpt-4o", temperature=0
).with_structured_output(Plan)

Let’s look at an example here for invoking the planner:

planner.invoke(
    {
        "messages": [
            ("user", "what is the hometown of the current Australia open winner?")
        ]
    }
)

Here’s the output:

Plan(steps=['Identify the current year.', 'Determine the winner of the Australian Open for the current year.', 'Research the hometown of the identified winner.'])

Now let’s create the re-planning step, which updates the plan based on the steps completed so far:

from typing import Union


class Response(BaseModel):
    """Response to user."""

    response: str


class Act(BaseModel):
    """Action to perform."""

    action: Union[Response, Plan] = Field(
        description="Action to perform. If you want to respond to user, use Response. "
        "If you need to further use tools to get the answer, use Plan."
    )


replanner_prompt = ChatPromptTemplate.from_template(
    """For the given objective, come up with a simple step by step plan. \
This plan should involve individual tasks, that if executed correctly will yield the correct answer. Do not add any superfluous steps. \
The result of the final step should be the final answer. Make sure that each step has all the information needed - do not skip steps.

Your objective was this:
{input}

Your original plan was this:
{plan}

You have currently done the following steps:
{past_steps}

Update your plan accordingly. If no more steps are needed and you can return to the user, then respond with that. Otherwise, fill out the plan. Only add steps to the plan that still NEED to be done. Do not return previously done steps as part of the plan."""
)


replanner = replanner_prompt | ChatOpenAI(
    model="gpt-4o", temperature=0
).with_structured_output(Act)

Finally, we create the graph and visualize it!

from langgraph.graph import StateGraph, START, END

workflow = StateGraph(PlanExecute)

# Add the plan node
workflow.add_node("planner", plan_step)

# Add the execution step
workflow.add_node("agent", execute_step)

# Add a replan node
workflow.add_node("replan", replan_step)

workflow.add_edge(START, "planner")

# From plan we go to agent
workflow.add_edge("planner", "agent")

# From agent, we replan
workflow.add_edge("agent", "replan")

workflow.add_conditional_edges(
    "replan",
    # Next, we pass in the function that will determine which node is called next.
    should_end,
    ["agent", END],
)

# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
app = workflow.compile()
from IPython.display import Image, display

display(Image(app.get_graph(xray=True).draw_mermaid_png()))

Agent Workflow Visualized

Example Use Case: IPO Readiness

Let’s say we are evaluating a company for IPO readiness. This is a complex task, as it requires gathering multiple financial metrics and piecing them together.
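For intuition, the margin metrics referenced in the query below are simple ratios over revenue. A quick sketch, using made-up illustrative figures (not real Toast or Marqeta data):

```python
# Compute common margin metrics from income-statement figures.
# All numbers below are illustrative only, not real company data.

def margins(revenue, cogs, operating_expenses, net_income, ebitda):
    gross_profit = revenue - cogs
    return {
        "gross_margin": gross_profit / revenue,
        "operating_margin": (gross_profit - operating_expenses) / revenue,
        "net_margin": net_income / revenue,
        "ebitda_margin": ebitda / revenue,
    }

m = margins(revenue=1000.0, cogs=780.0, operating_expenses=150.0,
            net_income=-40.0, ebitda=30.0)
for name, value in m.items():
    print(f"{name}: {value:.1%}")
```

The agent’s job is essentially to dig up these inputs for both companies from public filings and compute such comparisons across quarters.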

config = {"recursion_limit": 50}
inputs = {"input": """I need you to help me assess my company Toast for IPO readiness and create a report.
IPO readiness is measured by comparing my company (in the fintech industry) with a prior company (Marqeta) in my industry that has gone through an IPO.
IPO readiness should be measured across "financial metrics" (Revenue, ARR, Growth, etc.), "margin metrics" (Gross, Net, Operating, EBITDA, etc.), and "forward guidance given in each quarter compared to actuals".
Your task is to compare the financial metrics as defined above for Toast and Marqeta over the past year from 2023 to 2024 and give a detailed report like a spreadsheet."""
}
ans = []
ans = []

async for event in app.astream(inputs, config=config):
    for k, v in event.items():
        ans.append({k: v})
        if k != "__end__":
            print(k, v)

Here are the results:

As you can see, the output contains many intermediate messages as the agent works through its planning and re-planning states.

However, this is not very well summarized. Let’s now make another LLM call, to take the messages above, and summarize them into a spreadsheet for our analysis.

final_planner_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            """You are a friendly assistant""",
        ),
        ("placeholder", "{messages}"),
    ]
)
final_planner = final_planner_prompt | ChatOpenAI(
    model="gpt-4o", temperature=0
)
query = """Take the context provided, and write a spreadsheet report about IPO readiness for Toast in comparison with Marqeta.
IPO readiness should be measured across "financial metrics" (Revenue, ARR, Growth, etc.), "margin metrics" (Gross, Net, Operating, EBITDA, etc.), and "forward guidance given in each quarter compared to actuals".
Your task is to compare the financial metrics as defined above for Toast and Marqeta over the past year from 2023 to 2024 and give a detailed report like a spreadsheet,
solely based on the context."""


resp = final_planner.invoke(
    {
        "messages": [
            ("user", f"""Query: {query}
Context: {ans}""")
        ]
    }
)
print(resp.content)

Here is a part of the new report below. As you can see, it is much easier to read.

Conclusion

In this blog, we showed how to build a Plan-and-Execute agent using LangGraph, showcasing its application through a real-world IPO readiness assessment example.

This example demonstrates how agents can break down complex analytical tasks into manageable steps, execute them systematically, and compile the results into a structured format. Whether it’s market analysis, research synthesis, or other multi-step analytical processes, we’ve shown how the Plan-and-Execute pattern provides a robust framework for tackling complex problems in a structured way.
