Question Rephrasing

In this challenge, you must modify the initRephraseChain() function in modules/agent/chains/rephrase-question.chain.ts to add a chain that rephrases an input into a standalone question.

The chain will accept the following input:

typescript
Chain Input
export type RephraseQuestionInput = {
  // The user's question
  input: string;
  // Conversation history of {input, output} from the database
  history: ChatbotResponse[];
};

The output of the chain will be a string.
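
The ChatbotResponse type comes from the application's history module (the unit test at the end of this lesson imports it from ../history). A minimal sketch, assuming only the two properties this chain uses; the real type may include additional fields:

typescript
ChatbotResponse (sketch)
// Minimal sketch of ChatbotResponse; the real type in ../history
// may include additional fields such as IDs or timestamps.
export type ChatbotResponse = {
  input: string;  // The user's original question
  output: string; // The LLM's response
};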

The message history must be converted from an array of objects to a string in the following format:

Human: {input}
AI: {output}
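
For example, the Toy Story exchange used later in this lesson would be converted to:

Human: Who played Woody in Toy Story?
AI: Tom Hanks played Woody in Toy Story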

You will need to update the initRephraseChain() function to:

  1. Pass the history and input to a PromptTemplate containing the prompt in prompts/rephrase-question.txt.

  2. Pass the formatted prompt to the LLM

  3. Parse the output to a string

Open rephrase-question.chain.ts
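
Before making changes, it helps to know the shape of the file. The following shell is a sketch inferred from the unit test at the end of this lesson, which imports initRephraseChain as the default export and calls it with an LLM; your starter file may differ slightly:

typescript
Function Shell (sketch)
import { BaseChatModel } from "langchain/chat_models/base";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { ChatbotResponse } from "../history";

export type RephraseQuestionInput = {
  // The user's question
  input: string;
  // Conversation history of {input, output} from the database
  history: ChatbotResponse[];
};

export default function initRephraseChain(llm: BaseChatModel) {
  // TODO: create the prompt template and return the RunnableSequence
}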

Create a Prompt Template

Use the PromptTemplate.fromTemplate() static method to create a new prompt template containing the following prompt.

Rephrase Question Prompt
Given the following conversation and a question,
rephrase the follow-up question to be a standalone question about the
subject of the conversation history.

If you do not have the information required to construct
a standalone question, ask for clarification.

Always include the subject of the history in the question.

History:
{history}

Question:
{input}

Your code should resemble the following:

typescript
Prompt Template
// Prompt template
const rephraseQuestionChainPrompt = PromptTemplate.fromTemplate<
  RephraseQuestionInput,
  string
>(`
  Given the following conversation and a question,
  rephrase the follow-up question to be a standalone question about the
  subject of the conversation history.

  If you do not have the information required to construct
  a standalone question, ask for clarification.

  Always include the subject of the history in the question.

  History:
  {history}

  Question:
  {input}
`);
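
To build an intuition for how the {history} and {input} placeholders are substituted, you can format a template directly. The following is a hypothetical, untyped example for illustration only; it is not part of the challenge:

typescript
Inspecting a Formatted Prompt (illustrative)
import { PromptTemplate } from "@langchain/core/prompts";

// A hypothetical, untyped template used only to illustrate how
// {history} and {input} are substituted into the prompt text.
const example = PromptTemplate.fromTemplate(
  "History:\n{history}\n\nQuestion:\n{input}"
);

const formatted = await example.format({
  history: "Human: Can you recommend me a film?\nAI: Sure, I recommend The Matrix",
  input: "Who directed it?",
});
// formatted now contains the history and question in place of the placeholders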

Runnable Sequence

Next, use the RunnableSequence.from() static method to create a new chain that takes the RephraseQuestionInput and outputs a string.

The RunnableSequence will need to:

  1. Convert message history to a string

  2. Use the input and formatted history to format the prompt

  3. Pass the formatted prompt to the LLM

  4. Coerce the output into a string

Use the return keyword to return the sequence from the function.

typescript
Full Sequence
return RunnableSequence.from<RephraseQuestionInput, string>([
  // <1> Convert message history to a string
  RunnablePassthrough.assign({
    history: ({ history }): string => {
      if (history.length === 0) {
        return "No history";
      }
      return history
        .map(
          (response: ChatbotResponse) =>
            `Human: ${response.input}\nAI: ${response.output}`
        )
        .join("\n");
    },
  }),
  // <2> Use the input and formatted history to format the prompt
  rephraseQuestionChainPrompt,
  // <3> Pass the formatted prompt to the LLM
  llm,
  // <4> Coerce the output into a string
  new StringOutputParser(),
]);
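
Putting the pieces together, the body of initRephraseChain() should resemble the following sketch, assuming the imports from the function shell shown earlier. The prompt text is elided here for brevity; use the full prompt shown above:

typescript
Completed Function (sketch)
export default function initRephraseChain(llm: BaseChatModel) {
  // Prompt template as shown above (prompt text elided for brevity)
  const rephraseQuestionChainPrompt = PromptTemplate.fromTemplate<
    RephraseQuestionInput,
    string
  >(`...`);

  return RunnableSequence.from<RephraseQuestionInput, string>([
    RunnablePassthrough.assign({
      history: ({ history }): string =>
        history.length === 0
          ? "No history"
          : history
              .map(
                (response: ChatbotResponse) =>
                  `Human: ${response.input}\nAI: ${response.output}`
              )
              .join("\n"),
    }),
    rephraseQuestionChainPrompt,
    llm,
    new StringOutputParser(),
  ]);
}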

Convert Conversation History to a String

The RunnablePassthrough.assign() static method allows you to modify individual keys in the chain's input while passing the remaining keys through unchanged.

Here, the history input is an array of (:Response) nodes from the database. Prompt templates expect placeholder values to be strings, so you must convert the array into a string.

In the following code, the .map() method uses the input and output properties on each response to create a string in a format the LLM will understand, before the .join() method combines them into a single string.

typescript
Reformatting Messages
RunnablePassthrough.assign({
  history: ({ history }): string => {
    if (history.length === 0) {
      return "No history";
    }
    return history
      .map(
        (response: ChatbotResponse) =>
          `Human: ${response.input}\nAI: ${response.output}`
      )
      .join("\n");
  },
}),
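
To see exactly what this step produces, you could invoke it in isolation. The following is a hypothetical snippet for experimentation, using a simplified history type; it is not required for the challenge:

typescript
Invoking the Assign Step in Isolation (illustrative)
import { RunnablePassthrough } from "@langchain/core/runnables";

// Simplified history type for this illustration only
type HistoryItem = { input: string; output: string };

const formatHistory = RunnablePassthrough.assign({
  history: ({ history }: { history: HistoryItem[] }): string =>
    history.length === 0
      ? "No history"
      : history
          .map((r) => `Human: ${r.input}\nAI: ${r.output}`)
          .join("\n"),
});

const result = await formatHistory.invoke({
  input: "Who directed it?",
  history: [
    {
      input: "Can you recommend me a film?",
      output: "Sure, I recommend The Matrix",
    },
  ],
});
// result.input is unchanged, while result.history now contains:
// "Human: Can you recommend me a film?\nAI: Sure, I recommend The Matrix"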

Using the Chain

Later in the course, you will update the application to use the chain. You could initialize and run the chain with the following code:

typescript
Example Usage
const llm = new OpenAI() // Or the LLM of your choice
const rephraseQuestionChain = initRephraseChain(llm)

const output = await rephraseQuestionChain.invoke({
  input: 'What else did they act in?',
  history: [{
    input: 'Who played Woody in Toy Story?',
    output: 'Tom Hanks played Woody in Toy Story',
  }]
}) // Other than Toy Story, what movies has Tom Hanks acted in?
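
The prompt also tells the LLM to ask for clarification when the history lacks the required information, a behaviour the unit test below checks. For example, the following input might produce a clarifying question rather than a rephrased one; the output shown in the comment is illustrative:

typescript
Asking for Clarification (illustrative)
// With no history and an ambiguous follow-up, the prompt instructs
// the LLM to ask for clarification rather than guess the subject.
const clarification = await rephraseQuestionChain.invoke({
  input: 'What about last week?',
  history: [],
}) // e.g. "Could you provide more information about what you are referring to?"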

Testing your changes

If you have followed the instructions, you should be able to verify your changes by running the following unit test with the npm run test command.

sh
Running the Test
npm run test rephrase-question.chain.test.ts
typescript
rephrase-question.chain.test.ts
import { config } from "dotenv";
import { BaseChatModel } from "langchain/chat_models/base";
import { RunnableSequence } from "@langchain/core/runnables";
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import initRephraseChain, {
  RephraseQuestionInput,
} from "./rephrase-question.chain";
import { ChatbotResponse } from "../history";

describe("Rephrase Question Chain", () => {
  let llm: BaseChatModel;
  let chain: RunnableSequence;
  let evalChain: RunnableSequence<any, any>;

  beforeAll(async () => {
    config({ path: ".env.local" });

    llm = new ChatOpenAI({
      openAIApiKey: process.env.OPENAI_API_KEY,
      modelName: "gpt-3.5-turbo",
      temperature: 0,
      configuration: {
        baseURL: process.env.OPENAI_API_BASE,
      },
    });

    chain = await initRephraseChain(llm);

    evalChain = RunnableSequence.from([
      PromptTemplate.fromTemplate(`
        Is the rephrased version a complete standalone question that can be answered by an LLM?

        Original: {input}
        Rephrased: {response}

        If the question is a suitable standalone question, respond "yes".
        If not, respond with "no".
        If the rephrased question asks for more information, respond with "missing".
      `),
      llm,
      new StringOutputParser(),
    ]);
  });

  describe("Rephrasing Questions", () => {
    it("should handle a question with no history", async () => {
      const input = "Who directed the matrix?";

      const response = await chain.invoke({
        input,
        history: [],
      });

      const evaluation = await evalChain.invoke({ input, response });
      expect(`${evaluation.toLowerCase()} - ${response}`).toContain("yes");
    });

    it("should rephrase a question based on its history", async () => {
      const history = [
        {
          input: "Can you recommend me a film?",
          output: "Sure, I recommend The Matrix",
        },
      ];
      const input = "Who directed it?";
      const response = await chain.invoke({
        input,
        history,
      });

      expect(response).toContain("The Matrix");

      const evaluation = await evalChain.invoke({ input, response });
      expect(`${evaluation.toLowerCase()} - ${response}`).toContain("yes");
    });

    it("should ask for clarification if a question does not make sense", async () => {
      const input = "What about last week?";
      const history: ChatbotResponse[] = [];

      const response = await chain.invoke({
        input,
        history,
      });

      const evaluation = await evalChain.invoke({ input, response });
      expect(`${evaluation.toLowerCase()} - ${response}`).toContain("provide");
    });
  });
});

It works!

Once you have received a rephrased question from the LLM, click the button below to mark the challenge as completed.

Summary

In this lesson, you built a chain that takes the conversation history and rephrases the user's input into a standalone question.

In the next module, you will build a chain that uses a retriever to query a vector store for documents that are similar to an input.