👐 Create agent tools
The easiest way to define custom tools for agents in LangChain is with the `@tool` decorator. It turns a plain function into a tool, using the function name as the tool name and the function's docstring as the tool's description by default.
We want the AI research agent to have access to the following tools:
- `get_paper_metadata_arxiv`: Fetches a list of research papers from arXiv, given a topic
- `get_paper_summary_from_arxiv`: Fetches the summary for a single research paper, given a paper ID
- `answer_questions_about_topics`: Answers questions about a given topic based on information in the agent's knowledge base
Fill in any `<CODE_BLOCK_N>` placeholders and run the cells under the Step 6: Create agent tools section of the notebook to create tools for the agent to use.
The `get_paper_metadata_arxiv` tool has been defined for you. Use it as inspiration when creating the other two tools. The tool names and docstrings have already been written; all you have to do is code up the tool logic.
Similarly, use the sample test for the `get_paper_metadata_arxiv` tool as a template for testing the other tools.
The answers for code blocks in this section are as follows:
CODE_BLOCK_9

Answer

```python
doc = ArxivLoader(query=id, load_max_docs=1).get_summaries_as_docs()
if len(doc) == 0:
    return "No summary found for this paper."
return doc[0].page_content
```
CODE_BLOCK_10

Answer

```python
MongoDBAtlasVectorSearch.from_connection_string(
    connection_string=MONGODB_URI,
    namespace=DB_NAME + "." + COLLECTION_NAME,
    embedding=embedding_model,
    index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,
    text_key="abstract",
)
```
CODE_BLOCK_11

Answer

```python
context = vector_store.similarity_search_with_score(query=query)
context = [doc for doc, score in context if score > 0.8]
return context
```
CODE_BLOCK_12

Answer

```python
retrieve = {
    "context": RunnableLambda(get_context)
    | (lambda docs: "\n\n".join([d.page_content for d in docs])),
    "question": RunnablePassthrough(),
}
# Defining the chat prompt
template = """Answer the question based only on the following context. IF NO CONTEXT IS PROVIDED, SAY I DO NOT KNOW: \
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
# Parse output as a string
parse_output = StrOutputParser()
# Retrieval chain
retrieval_chain = retrieve | prompt | llm | parse_output
answer = retrieval_chain.invoke(query)
return answer
```
CODE_BLOCK_13

Answer

```python
get_paper_summary_from_arxiv.invoke("1808.09236")
```
CODE_BLOCK_14

Answer

```python
get_paper_summary_from_arxiv.invoke("808.09236")
```
CODE_BLOCK_15

Answer

```python
answer_questions_about_topics.invoke("Partial cubes")
```
CODE_BLOCK_16

Answer

```python
answer_questions_about_topics.invoke("Tree of Thoughts")
```