AI Training Model

Wudao (悟道)

Beijing Zhiyuan Research Institute (BAAI) launched Wudao 2.0, the successor to Wudao 1.0


In June 2021, Beijing Zhiyuan Research Institute (BAAI) launched Wudao 2.0, the successor to Wudao 1.0 and China's first large-scale intelligent model system. Wudao is a language model that aims to surpass OpenAI's GPT-3 and Google's LaMDA in approximating human thinking. After training on 4.9 TB of image and text data and surpassing state-of-the-art (SOTA) results on 9 benchmarks, Wudao is claimed to be closer to Artificial General Intelligence (AGI) and human-level thinking than any of its peers.
Wudao was trained on 4.9 TB of high-quality Chinese and English image and text data:
1.2 TB of Chinese text data
2.5 TB of Chinese image-text data
1.2 TB of English text data
Wudao is trained with FastMoE, an open-source Mixture-of-Experts (MoE) system. MoE is a machine learning technique that works as follows: divide the prediction task into subtasks; train an expert (learner) model for each subtask; train a gating model that learns which expert to consult based on the input; and combine the experts' predictions. FastMoE enables Wudao to consult different expert models in parallel and switch to the one with the best predictions. For example, if the input is English text, Wudao routes it to an expert model that generates responses in English.
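The routing idea described above can be sketched in a few lines of Python. This is not the FastMoE API; it is a minimal toy illustration in which the "experts" and the gating rule (an ASCII-character ratio used as a stand-in for a learned gate) are hypothetical:

```python
# Toy mixture-of-experts (MoE) routing sketch: two hypothetical experts
# and a gating function that picks one based on the input.

def english_expert(text):
    # Hypothetical expert specialized for English input.
    return f"[en] reply to: {text}"

def chinese_expert(text):
    # Hypothetical expert specialized for Chinese input.
    return f"[zh] reply to: {text}"

def gate(text):
    # Toy gating model: score each expert for this input and return the
    # highest-scoring one. A real gate would be learned; here the score
    # is simply the fraction of ASCII characters in the text.
    ascii_ratio = sum(c.isascii() for c in text) / max(len(text), 1)
    scores = {english_expert: ascii_ratio, chinese_expert: 1 - ascii_ratio}
    return max(scores, key=scores.get)

def moe_predict(text):
    # Route the input to the chosen expert and return its prediction.
    expert = gate(text)
    return expert(text)

print(moe_predict("Hello, world"))  # routed to the English expert
print(moe_predict("你好，世界"))      # routed to the Chinese expert
```

In a real MoE layer the gate produces a weight per expert and the selected experts' outputs are combined; the hard `max` routing here just keeps the example short.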


