Yong Jiang

Chief Architect, Dify

Responsible for best-practice exploration and architecture design at Dify.AI. He describes himself as a 90% E (extroverted) person who enjoys tinkering with cutting-edge technologies and believes that writing code is the purest kind of work. He has extensive experience in software engineering, high-availability services, and data processing, and independently built the backend of a Notion-like note-taking knowledge base service with over one million users. He has deep understanding of and hands-on practice in the RAG field, and has spoken on related topics many times at the Vector Database Conference, the A2M Conference, and the Artificial Intelligence Summit.

Topic

Key Technologies of RAG and Future Trends

With the development of Large Language Models (LLMs), they have become part of our life and work and have changed the way we interact with information through their remarkable versatility and intelligence. However, LLMs are prone to hallucination, their knowledge lags behind the real world, and as black-box models they offer limited interpretability and debuggability. Against this background, RAG (Retrieval-Augmented Generation) was born and has become a major trend of the AI era. RAG effectively mitigates the hallucination problem, speeds up knowledge updates, and makes generated content traceable, making large language models more practical and credible in real-world applications. However, as multimodality, long-context models, and Agents continue to advance, how these capabilities can empower RAG systems becomes the next problem to be solved. In this session, we will give you a comprehensive and systematic understanding of RAG, discuss its key technologies and future trends, and elaborate on the latest advances in retrieval-augmentation technology and its key challenges.
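
For readers unfamiliar with the pattern the session discusses, the sketch below illustrates the basic retrieve-then-generate flow behind RAG. It is a minimal illustration under simplified assumptions: the toy corpus, the bag-of-words scoring, and the prompt format are hypothetical placeholders, not Dify's implementation.

```python
# A minimal, illustrative retrieve-then-generate (RAG) flow.
# The corpus, similarity scoring, and prompt format here are
# hypothetical simplifications for illustration only.
from collections import Counter
import math

DOCUMENTS = [
    "Dify is a platform for building LLM applications.",
    "RAG retrieves relevant documents and feeds them to the model as context.",
    "Long-context models can read much larger inputs in a single prompt.",
]

def bag_of_words(text: str) -> Counter:
    """Tokenize naively into lowercase word counts."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Rank documents by similarity to the query and keep the top_k."""
    q = bag_of_words(query)
    ranked = sorted(DOCUMENTS,
                    key=lambda d: cosine_similarity(q, bag_of_words(d)),
                    reverse=True)
    return ranked[:top_k]

def build_prompt(query: str) -> str:
    """Prepend retrieved passages so the answer is grounded and traceable."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n\nQuestion: {query}")

if __name__ == "__main__":
    # The assembled prompt would then be sent to an LLM of your choice.
    print(build_prompt("What does RAG do?"))
```

In a production system the bag-of-words scoring would typically be replaced by embedding-based vector search and reranking, which is the kind of retrieval technology the session examines in depth.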