Elevate Your Interview Preparation with the AI Interviewer App

In today’s competitive job market, interview preparation is key to landing your dream job. To streamline this process, the AI Interviewer app offers an innovative solution by harnessing the power of artificial intelligence. Developed by Lyzr, this Streamlit-based application provides a seamless platform for extracting skills from your resume, generating tailored interview questions, and even providing reference answers. Let’s dive into how the AI Interviewer app works and how you can leverage it to enhance your interview readiness.


For more on Lyzr, see the documentation at docs.lyzr.ai.

The AI Interviewer app is designed to revolutionize interview preparation by leveraging AI capabilities. Powered by Lyzr’s agent-centric approach, this application simplifies the process of generating interview questions and answers with minimal code and time investment.

Setting up the Project

Getting started with the AI Interviewer app is straightforward. Follow these steps to set up the project:

Clone the App: Clone the AI Interviewer app repository from GitHub.

git clone https://github.com/PrajjwalLyzr/AI-Interviewer

Create a Virtual Environment: Set up a virtual environment and activate it.

python3 -m venv venv
source venv/bin/activate

Set Environment Variables: Create a .env file and add your OpenAI API key.

OPENAI_API_KEY=paste-your-openai-api-key-here

Install Dependencies: Install the required dependencies.

pip install lyzr streamlit
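
The modules shown later also import python-dotenv, Pillow, and the openai client; if your environment does not already pull these in as dependencies of lyzr, install them explicitly as well:

pip install python-dotenv pillow openai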

Core Components of the AI Interviewer App

Let’s explore the key components of the AI Interviewer app:

Utils Module for Common Functions

The utils.py file in the project serves as a utility module containing common functions utilized throughout the application. It includes functions for removing existing files, retrieving files in a directory, and saving uploaded files.

import os
import shutil
from typing import Optional, Literal
import streamlit as st
from dotenv import load_dotenv; load_dotenv()

def remove_existing_files(directory):
    # Delete every file, symlink, and subfolder inside the given directory.
    for filename in os.listdir(directory):
        file_path = os.path.join(directory, filename)
        try:
            if os.path.isfile(file_path) or os.path.islink(file_path):
                os.unlink(file_path)
            elif os.path.isdir(file_path):
                shutil.rmtree(file_path)
        except Exception as e:
            st.error(f"Error while removing existing files: {e}")

def get_files_in_directory(directory):
    # Return the full path (directory + filename) of every file in the directory.
    files_list = []

    if os.path.exists(directory) and os.path.isdir(directory):
        for filename in os.listdir(directory):
            file_path = os.path.join(directory, filename)

            if os.path.isfile(file_path):
                files_list.append(file_path)

    return files_list

def save_uploaded_file(uploaded_file, directory_name):
    # Save the uploaded file, clearing out any previously stored files first.
    remove_existing_files(directory_name)

    file_path = os.path.join(directory_name, uploaded_file.name)
    with open(file_path, "wb") as file:
        file.write(uploaded_file.read())
    st.success("File uploaded successfully")
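
These helpers can also be exercised on their own. The sketch below assumes the same `from utils import utils` import path that app.py uses later on, and that the data folder app.py creates already exists:

from utils import utils

# List whatever is currently sitting in the upload folder ("data" is the folder app.py creates).
for path in utils.get_files_in_directory("data"):
    print(path)

# Clear the folder out before the next upload.
utils.remove_existing_files("data")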

Interface for Large Language Model (LLM) Calling

The llm_calling function serves as an interface to OpenAI's large language models (LLMs). It sends a system prompt and a user prompt to the chosen chat model and returns the generated text, with the sampling parameters exposed as arguments.

def llm_calling(
    user_prompt: str,
    system_prompt: Optional[str] = "You are a Large Language Model. You answer questions.",
    llm_model: Optional[Literal["gpt-4-turbo-preview", "gpt-4"]] = "gpt-4-turbo-preview",
    temperature: Optional[float] = 1,        # 0 to 2
    max_tokens: Optional[int] = 4095,        # 1 to 4095
    top_p: Optional[float] = 1,              # 0 to 1
    frequency_penalty: Optional[float] = 0,  # 0 to 2
    presence_penalty: Optional[float] = 0    # 0 to 2
) -> str:
    if not (1 <= max_tokens <= 4095):
        raise ValueError("`max_tokens` must be between 1 and 4095, inclusive.")

    from openai import OpenAI
    client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

    response = client.chat.completions.create(
        model=llm_model,
        messages=[
            {
                "role": "system",
                "content": system_prompt
            },
            {
                "role": "user",
                "content": user_prompt
            }
        ],
        temperature=temperature,
        max_tokens=max_tokens,
        top_p=top_p,
        frequency_penalty=frequency_penalty,
        presence_penalty=presence_penalty
    )

    return response.choices[0].message.content
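
Because app.py calls this function as utils.llm_calling, a quick sanity check from a Python shell might look like the sketch below; the prompt is just a placeholder, and it assumes OPENAI_API_KEY is set in your .env file:

from utils import utils

# Placeholder prompt, purely to confirm the OpenAI round trip works.
reply = utils.llm_calling(
    user_prompt="Give me one interview question about Python generators.",
    system_prompt="You are an interview expert",
    llm_model="gpt-4-turbo-preview",
)
print(reply)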

Kernel for the AI Interviewer Application

The ai_interviewer function serves as the core of the AI Interviewer app. It uses the Lyzr library's QABot module to build a question-answering agent over the uploaded resume, whether it is a PDF or a DOCX file; the rest of the app then queries this agent to extract skills and drive question generation.

from lyzr import QABot
from pathlib import Path
import os
from dotenv import load_dotenv

load_dotenv()

os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")

def ai_interviewer(path, file):
    # Build a QA agent over the uploaded resume, based on its file extension.
    if file == ".pdf":
        interviewer_pdf = QABot.pdf_qa(
            input_files=[Path(path)]
        )
        return interviewer_pdf

    if file == ".docx":
        interviewer_doc = QABot.docx_qa(
            input_files=[Path(path)]
        )
        return interviewer_doc
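
On its own, the returned agent can be queried just as app.py does further below. This sketch assumes a resume already saved at data/resume.pdf, which is a placeholder path:

from lyzr_qabot import ai_interviewer

# Hypothetical standalone run; the resume path is a placeholder.
agent = ai_interviewer(path="data/resume.pdf", file=".pdf")
skills = agent.query("Extract all the skills from the given file")
print(skills.response)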

Entry Point for the Application (app.py)

The app.py file defines the main functionality of the AI Interviewer app using Streamlit. It allows users to upload their resume in PDF or DOCX format, extracts skills, generates interview questions, and displays reference answers.

import streamlit as st
import os
from pathlib import Path
from PIL import Image
from utils import utils
from lyzr_qabot import ai_interviewer

data = "data"
os.makedirs(data, exist_ok=True)

def interviewer(path, filetype):
    # Ask the QA agent to pull every skill mentioned in the resume.
    interview_agent = ai_interviewer(path=path, file=filetype)
    metric = "Extract all the skills from the given file"
    skills = interview_agent.query(metric)

    return skills.response

def gpt_interview_questions(qabot_response):
    # Generate one targeted question and a short reference answer from the extracted skills.
    question_response = utils.llm_calling(
        user_prompt=f"Create one interview question based on this skill set {qabot_response}, [!important] the question should be no more than 2 lines, make it very specific",
        system_prompt="You are an interview expert",
        llm_model="gpt-4-turbo-preview"
    )

    answer_response = utils.llm_calling(
        user_prompt=f"Generate an answer for this {question_response}, [!important] the answer should be no more than 3 lines, keep it simple",
        system_prompt="You are an interview expert",
        llm_model="gpt-4-turbo-preview"
    )

    return question_response, answer_response

def question_answer_session(typefile):
    path = utils.get_files_in_directory(data)
    filepath = path[0]
    qa_response = interviewer(path=filepath, filetype=typefile)
    question, gpt_answer = gpt_interview_questions(qabot_response=qa_response)
    st.header(question)
    st.markdown("---")
    st.subheader("Reference Answer")
    st.write(gpt_answer)

Executing the Application

def main():
    # The logo path below is an assumption; point it at whatever image ships with your clone of the repo.
    image = Image.open("lyzr-logo.png")
    st.sidebar.image(image, width=150)
    file = st.sidebar.file_uploader("Upload your resume", type=["pdf", "docx"])
    if file is None:
        st.subheader("👈 Upload your resume to get started!!!")
        utils.remove_existing_files(data)

    if file is not None:
        utils.save_uploaded_file(file, directory_name=data)
        typefile = Path(file.name).suffix

        if st.sidebar.button("Next"):
            question_answer_session(typefile=typefile)

        st.sidebar.info("Click this 👆 Button to generate Interview Questions")

if __name__ == "__main__":
    main()
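
With all the pieces in place, launch the app from the project root the same way as any Streamlit script, assuming the entry point is named app.py as above:

streamlit run app.py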

The AI Interviewer app offers a user-friendly interface for interview preparation:

Upload Resume: Users can upload their resume in PDF or DOCX format.

Generate Interview Questions: Upon uploading the resume, the app extracts skills and generates tailored interview questions.

View Reference Answers: Users can view reference answers corresponding to the generated interview questions.

Conclusion

With the AI Interviewer app, interview preparation becomes more efficient and effective. By leveraging AI capabilities, users can extract skills from their resumes and generate interview questions tailored to their expertise. Elevate your interview preparation today with the AI Interviewer app from Lyzr!

References

AI Interviewer GitHub Repository
Lyzr Website
Book a Demo
Lyzr Community Channels: Discord, Slack
