AI Training Model



OpenBMB (Open Lab for Big Model Base) aims to build a library of large-scale pre-trained language models and related tools, accelerate the training, fine-tuning, and inference of models with tens of billions of parameters or more, and lower the barrier to using large models. By working with developers at home and abroad to build an open-source community around large models, it seeks to advance the large-model ecosystem and make large models standardized, widely accessible, and practical, so that they can reach ordinary households.

The OpenBMB open-source community is jointly initiated and supported by the Natural Language Processing Laboratory of Tsinghua University and the Large Language Model Acceleration Technology Innovation Center of the Beijing Academy of Artificial Intelligence (BAAI). The founding team has a strong research foundation in natural language processing and pre-trained models, and in recent years has published dozens of papers at top international conferences on topics including model pre-training, prompt tuning, and model compression.
