Hands-on Generative AI (GenAI) involves actively engaging with AI capabilities to generate new content across mediums such as text, images, audio, and video. Generative models, such as Large Language Models (LLMs), learn the underlying probability distribution of their training data, which enables them to generate new samples from that learned distribution.
A breakthrough came in 2017, when researchers at Google Brain published a paper called Attention Is All You Need. This paper established the foundation of LLMs by introducing the Transformer, a new neural network architecture based on the attention mechanism. Attention allowed models to become much better at learning and tracking context in long-form text, which earlier architectures struggled to do.
LLMs, such as OpenAI’s GPT models (used in ChatGPT), Google’s PaLM (used in Bard), Meta’s LLaMA, and Anthropic’s Claude, operate by taking natural language text as input and generating corresponding natural language text as output. These are very large generative neural networks trained on tokens drawn from the extensive body of publicly available text data (e.g. books, articles, Wikipedia, software manual pages, GitHub, etc.).
Before delving deeper into LLM applications, it’s essential to understand the concept of prompts and the art of prompt engineering.
A prompt is a short text or instruction that the user provides to the LLM to obtain a relevant and meaningful response from it. For example: “Compose a catchy tweet announcing our upcoming product launch event” or “Translate our website’s homepage content from English to Spanish”.
Prompt engineering is the process of carefully designing and crafting prompts to interact with LLMs effectively. The goal is to guide the LLM to produce accurate and relevant outputs for a specific task or application.
A prompt typically consists of –
Instruction – the task the model should perform
Context – external or background information that steers the response
Input data – the text or question the model should act on
Output indicator – the desired format or style of the response
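As a simple illustration, a prompt’s typical components (an instruction, optional context, the input data, and an output indicator) can be assembled programmatically. The helper below is illustrative, not from the article:

```python
# Assemble a prompt from its typical components. The component names
# follow common prompt-engineering convention and are illustrative.
def build_prompt(instruction, context, input_data, output_indicator):
    parts = [instruction]
    if context:
        parts.append(f"Context: {context}")
    parts.append(f"Input: {input_data}")
    parts.append(f"Output format: {output_indicator}")
    return "\n".join(parts)

prompt = build_prompt(
    instruction="Classify the sentiment of the review.",
    context="Reviews are about our mobile banking app.",
    input_data="The app crashes every time I log in.",
    output_indicator="One word: Positive, Negative, or Neutral",
)
print(prompt)
```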
Important points which need to be considered for good prompts are –
Be clear and specific about the task
Provide relevant context and, where possible, examples
Specify the desired output format and length
Iterate – refine the prompt based on the responses received
Prompt engineering is an involved process, because prompts need to be carefully crafted and are specific to the problem statement. Prompts for giving restaurant recommendations will look very different from prompts for a QnA/FAQ application. Below are a few techniques frequently used to engineer specific, well-functioning prompts.
Example (few-shot): a prompt to get the sentiment of the last piece of text, given labelled examples before it:
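The original example is not reproduced here, but a few-shot sentiment prompt along these lines might look like the sketch below (the example texts are invented for illustration):

```python
# Few-shot prompting: labelled examples teach the model the task, and the
# final, unlabelled text is the one we want classified.
examples = [
    ("The delivery was quick and the packaging was great!", "Positive"),
    ("I waited two weeks and the item arrived broken.", "Negative"),
]
target = "The product works fine, nothing special."

lines = ["Classify the sentiment of each text as Positive, Negative, or Neutral.", ""]
for text, label in examples:
    lines.append(f"Text: {text}\nSentiment: {label}\n")
lines.append(f"Text: {target}\nSentiment:")
prompt = "\n".join(lines)
print(prompt)
```

The model completes the prompt after the final “Sentiment:”, following the pattern set by the labelled examples.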
Example (few-shot CoT): a prompt to get the answer to question 3, given worked answers to the first two questions:
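Again, the original example is not shown, but a few-shot chain-of-thought prompt of this shape might look like the following sketch (the questions are invented for illustration):

```python
# Few-shot chain-of-thought: each worked example includes its reasoning
# steps, nudging the model to reason step by step for the new question.
worked = [
    ("A shop sells pens at 5 Rs each. How much do 3 pens cost?",
     "Each pen costs 5 Rs. 3 pens cost 3 * 5 = 15 Rs. The answer is 15."),
    ("A train travels 60 km in 1 hour. How far does it go in 4 hours?",
     "It covers 60 km per hour. In 4 hours it covers 4 * 60 = 240 km. The answer is 240."),
]
question3 = "A box holds 12 apples. How many apples are in 5 boxes?"

parts = []
for q, a in worked:
    parts.append(f"Q: {q}\nA: {a}\n")
parts.append(f"Q: {question3}\nA: Let's think step by step.")
prompt = "\n".join(parts)
print(prompt)
```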
Real-world applications of Generative AI
Having explored the fundamental concepts of LLMs and the crucial aspects of prompts and prompt engineering, it’s time to dive into the exciting realm of real-world applications of LLMs.
Application #1 – Document query chatbot
Problem statement – An insurance company wants to develop an AI-powered chatbot for instant query resolution. The relevant information is spread across multiple complex documents.
Solution approach – Load the set of available documents to the server as the chatbot’s knowledge base. The documents are divided into smaller chunks, embeddings are generated for each chunk, and the embeddings are stored in a vector database. When a user asks a question, the chatbot uses these embeddings to find similar chunks in the database and retrieves the relevant information. It then constructs a prompt that includes the retrieved chunks as context and instructs the LLM to answer the user’s question.
Code –
Import Python libraries
Load the PDF document that acts as the knowledge base for answering users’ questions
After loading the document, it needs to be split into smaller chunks so that embeddings can be created for every chunk.
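As a sketch of the splitting step, a simple character-based splitter with overlap might look like this (libraries such as LangChain provide ready-made splitters; the chunk_size and overlap values here are illustrative):

```python
# Split a long document into overlapping chunks. The overlap keeps
# sentences that straddle a chunk boundary retrievable from either side.
def split_into_chunks(text, chunk_size=500, overlap=50):
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

document = "Health insurance policy terms. " * 100
chunks = split_into_chunks(document)
print(len(chunks), len(chunks[0]))
```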
The chunks above are converted into embeddings and stored in the vector database
Code snippet to match the user’s question with the chunks stored in the vector database and retrieve the relevant pieces of information
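The two steps above (embedding the chunks, then matching a user’s question against them) can be sketched end to end. A real deployment would use a proper embedding model and a vector database such as FAISS or Chroma; the bag-of-words `embed` and in-memory store below are simplified stand-ins:

```python
import math
import re
from collections import Counter

# Stand-in for a real embedding model: a bag-of-words token-count vector.
def embed(text):
    return Counter(re.findall(r"\w+", text.lower()))

def cosine_similarity(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Stand-in for a vector database: stores (chunk, embedding) pairs and
# returns the chunks most similar to the query.
class InMemoryVectorStore:
    def __init__(self):
        self.entries = []

    def add(self, chunk):
        self.entries.append((chunk, embed(chunk)))

    def search(self, query, top_k=2):
        q = embed(query)
        ranked = sorted(self.entries,
                        key=lambda entry: cosine_similarity(q, entry[1]),
                        reverse=True)
        return [chunk for chunk, _ in ranked[:top_k]]

store = InMemoryVectorStore()
for chunk in [
    "The policy can be cancelled within 30 days of receipt.",
    "Air ambulance expenses are covered up to Rs.2,50,000 per hospitalization.",
    "Premiums are payable annually before the due date.",
]:
    store.add(chunk)

print(store.search("Can I get the policy cancelled?", top_k=1))
```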
The code snippet below is responsible for answering the user’s question. Chunks similar to the user’s query are retrieved from the vector database and added as context in the prompt.
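Putting it together, the answering step can be sketched as below. The `llm` parameter is a placeholder for the actual model call (the article uses an OpenAI GPT model); a canned stub stands in for it here:

```python
# Build the final prompt: retrieved chunks become the context, followed by
# the user's question. `llm` is any callable mapping a prompt string to a
# reply, so a real API client or a stub can be plugged in.
def answer_question(question, retrieved_chunks, llm):
    context = "\n".join(retrieved_chunks)
    prompt = (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm(prompt)

# Stub standing in for a real LLM call, for demonstration only.
def stub_llm(prompt):
    return "The cancellation period is 30 days from the date of receipt of the policy."

chunks = ["The policy can be cancelled within 30 days of receipt of the policy."]
print(answer_question("What is the cancellation period?", chunks, stub_llm))
```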
Now let’s see the chatbot in action.
Output: The cancellation period is 30 days from the date of receipt of the policy.
Output: Air ambulance expenses are covered up to Rs.2,50,000 per hospitalization, not exceeding Rs.5,00,000 per policy period.
Application #2 – Sentiment and Intent Extraction
Problem statement – In today’s data-driven business landscape, understanding customer sentiment and intent from reviews is paramount. Let’s look at basic sentiment and intent extraction from customer reviews and feedback.
Solution approach – This solution processes reviews in batch and returns the output. For each review, a prompt is generated and then passed to the LLM to extract sentiment and intent.
Code –
Creates a formatted prompt for analyzing customer reviews
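A hypothetical version of that prompt-creation step is sketched below; the exact wording of the article’s prompt is not shown, so this format is an assumption:

```python
# Build a prompt asking the model for sentiment and intent in a fixed,
# parseable format. The wording and intent categories are illustrative.
def create_review_prompt(review):
    return (
        "Analyze the customer review below.\n"
        "Return the sentiment (Positive, Negative, or Neutral) and the "
        "customer's intent (e.g. complaint, praise, query, refund request).\n\n"
        f"Review: {review}\n\n"
        "Sentiment:\nIntent:"
    )

print(create_review_prompt("The checkout page keeps timing out, I want my money back."))
```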
The analyze_reviews function is designed to extract insights from a collection of customer reviews using OpenAI’s GPT-3.5 model.
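A minimal sketch of analyze_reviews along these lines is shown below. The `complete` callable is an assumption standing in for the actual GPT-3.5 API call, injected so the batching loop can run without API access:

```python
# Iterate over the reviews, build a prompt for each, and send it to the
# model. `complete` wraps the actual LLM call (e.g. OpenAI's chat
# completions API with gpt-3.5-turbo) and is injected as a parameter.
def analyze_reviews(reviews, complete):
    results = []
    for review in reviews:
        prompt = (
            "Return the sentiment (Positive/Negative/Neutral) and intent "
            f"of this customer review:\n\n{review}"
        )
        results.append({"review": review, "analysis": complete(prompt)})
    return results

# Stub standing in for the real API call, for demonstration only.
fake_complete = lambda prompt: "Sentiment: Negative\nIntent: complaint"
out = analyze_reviews(["App crashes on login."], fake_complete)
print(out[0]["analysis"])
```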
Let’s process reviews
Conclusion
We’ve explored the art of prompt engineering, a crucial skill in harnessing the power of LLMs for specific tasks. From document query chatbots that resolve queries against complex documents to sentiment and intent extraction for data-driven insights, LLMs are revolutionizing how we interact with text data.
Beyond these applications, LLMs find utility across diverse sectors, including content generation, language translation, text-to-code, and even scientific research. As these models continue to evolve and improve, and with careful prompt engineering and innovative applications, LLMs are ready to reshape the way we communicate and solve problems in countless domains.
