Bing Orchestrator Helps Search and LLMs Talk


Search engines and LLMs work on very different principles. Tech companies have tried to marry the two, but integration is challenging.

I have just watched this video (recommended!) that sheds quite a bit of light on how the search engine and LLM integration works in Bing. The picture above is from the video and shows a diagram of the search engine-AI interaction managed by the Orchestrator.

The “Sydney Orchestrator” (or Bing Orchestrator) integrates ChatGPT-like AI into Microsoft’s search engine, enhancing Bing’s capabilities with GPT. Here’s how it works:

Query Processing and Iteration: Behind the scenes, the Orchestrator uses “prompt chaining”: it processes search queries iteratively, generating internal queries and refining them on each pass to get more accurate and contextually relevant search results.
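Here is a minimal sketch of what iterative query refinement could look like in Python. The llm() and web_search() helpers are hypothetical stand-ins (canned outputs, not real Bing or OpenAI APIs), and this is an illustration of the general pattern, not Bing’s actual code.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a GPT-style completion call."""
    return "bing orchestrator architecture diagram"   # canned output for illustration


def web_search(query: str) -> list[str]:
    """Hypothetical stand-in for a query against the search index."""
    return [f"snippet about: {query}"]                 # canned results for illustration


def refine_query(user_query: str, rounds: int = 3) -> tuple[str, list[str]]:
    """Iteratively rewrite the query, feeding each round's results back in."""
    query, results = user_query, []
    for _ in range(rounds):
        results = web_search(query)
        prompt = (
            "Rewrite this search query so it returns more relevant results.\n"
            f"Current query: {query}\n"
            f"Top snippets so far: {results[:3]}\n"
            "Improved query:"
        )
        query = llm(prompt).strip()
    return query, results
```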

Information Grounding and Relevance: “Grounding” is meant to deal with AI hallucinations by tying the model’s answers to retrieved search results. The Orchestrator ensures that responses are relevant and current, which is crucial for recent or time-sensitive queries. It also issues iterative queries that broaden the context given to the GPT model, providing a more comprehensive understanding of the subject matter. The “precise” option in Bing relies on grounding.
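A minimal sketch of grounding, reusing the hypothetical llm() helper above: the model is instructed to answer only from the retrieved snippets, which limits hallucinations and keeps answers tied to current results. Again, this is an assumed illustration, not Bing’s implementation.

```python
def grounded_answer(user_query: str, snippets: list[str]) -> str:
    """Answer strictly from retrieved snippets to limit hallucinations."""
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets[:5]))
    prompt = (
        "Answer the question using ONLY the numbered snippets below. "
        "Cite snippet numbers, and reply 'not found' if they lack the answer.\n\n"
        f"Snippets:\n{context}\n\n"
        f"Question: {user_query}\nAnswer:"
    )
    return llm(prompt)
```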

Coordination Between GPT and Bing’s Indexing: The Orchestrator acts as an intermediary between GPT’s language processing and Bing’s search indexing.
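Tying the two sketches together, an orchestrator loop in this pattern might simply chain query refinement, the search index, and grounded answering. This is a hypothetical composition of the helpers above, not a description of Bing’s internals.

```python
def orchestrate(user_query: str) -> str:
    """Refine the query, hit the index, then generate a grounded answer."""
    refined_query, results = refine_query(user_query)   # iterative query rewriting
    if not results:                                      # fall back to the raw query
        results = web_search(user_query)
    return grounded_answer(user_query, results)          # answer grounded in results


print(orchestrate("what is the Bing Orchestrator?"))
```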

Join us this week for the always-popular, constantly updated class

“ChatGPT and AI for Sourcing and Recruitment”

November 15-16 (Wed-Thu), 2023.
