In the past few years, companies have primarily been using multi-tenant Revenue/Sales Intelligence and Meeting AI SaaS offerings to transcribe business conversations and extract insights. With such multi-tenant offerings, transcription and natural language processing take place on the vendor's cloud. Once the transcript is generated, NLU models offered by the Meeting AI vendor are used to extract insights. For example, revenue intelligence products like Gong extract questions and sales blockers from sales conversations. Essentially, these NLU models (many of which predate the LLMs) were able to summarize and extract topics, keywords, and phrases. Most meeting AI assistants extract summaries and action items. Enterprises did not mind using the vendor's cloud infrastructure to store the transcripts, because what these NLU models could do seemed pretty harmless. However, LLMs take this to a whole different level.

Our team used the OpenAI Embeddings API to generate embeddings of our daily meeting transcripts, which were conducted over a one-month period. We stored these embeddings in an open-source vector database (our knowledge base). During testing, for each user question we generated an embedding of the question and queried the vector database (i.e., the knowledge base) to retrieve related/similar embeddings. Then we provided the corresponding documents as context, along with the user question, in a prompt to the GPT-3.5 API so that it could generate the answer.

We were able to get answers to the following questions:

1. …
5. Did the Company discuss any trade secrets?
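The retrieval step described above can be sketched as follows. This is a minimal illustration, not the team's actual implementation: it assumes the transcript embeddings have already been generated (e.g. with the OpenAI Embeddings API), and it stands in an in-memory list with cosine-similarity search for the open-source vector database. The transcript snippets, the `top_k` helper, and the tiny 3-dimensional vectors are all hypothetical, purely for illustration (real embedding vectors have hundreds or thousands of dimensions).

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_embedding, knowledge_base, k=2):
    """Return the text of the k chunks most similar to the query embedding."""
    ranked = sorted(
        knowledge_base,
        key=lambda item: cosine_similarity(query_embedding, item["embedding"]),
        reverse=True,
    )
    return [item["text"] for item in ranked[:k]]

# Hypothetical knowledge base: transcript chunks with toy 3-d embeddings.
kb = [
    {"text": "Standup: discussed release blockers", "embedding": [0.9, 0.1, 0.0]},
    {"text": "Legal review of the NDA",             "embedding": [0.1, 0.9, 0.2]},
    {"text": "Sprint retro notes",                  "embedding": [0.8, 0.2, 0.1]},
]

# In the real pipeline this vector would come from embedding the user question.
question_embedding = [0.2, 0.95, 0.1]
context_chunks = top_k(question_embedding, kb, k=1)

# The retrieved context plus the question would then be sent to the
# GPT-3.5 chat completion endpoint to generate the final answer.
prompt = (
    "Answer using only this context:\n"
    + "\n".join(context_chunks)
    + "\n\nQuestion: Did the Company discuss any trade secrets?"
)
```

The key design point is that the LLM never sees the whole transcript corpus; it only receives the few chunks whose embeddings are nearest to the question's embedding, which keeps the prompt within the model's context window.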