The 「通往AGI之路」 ("Way to AGI") knowledge base does not explicitly state which large language model it itself uses. However, here is related information and resources on large language models:
The brand VI (visual identity) of 「通往AGI之路」 combines distinctive design elements: rainbow colors convey diversity and innovation, a deer symbolizes wisdom and elegance, and a sans-serif typeface expresses modernity and clarity, together building a vibrant, forward-looking brand image.

Color: We chose a rainbow palette as the primary color scheme, representing diversity, inclusiveness, and innovation. Its rich layers and vivid contrasts symbolize the boundless possibilities and multidimensional perspectives of the AI field.

Pattern: The brand's signature motif is a deer (鹿), a homophone of 路 ("road") in Chinese, symbolizing the road to the AGI future. The deer's elegant, intelligent image conveys taste and wisdom in the pursuit of AGI.

Typography: We chose a clean, modern sans-serif typeface; its minimal style is easy to read and emphasizes clarity and directness in communicating information.

「通往AGI之路」 is a vibrant, innovative brand that pursues the aesthetics of technology. Our VI is more than a visual presentation: it embodies the diverse thinking and spirit of innovation on our road of AGI exploration.
[title]2. Introduction to Large Language Models
- NLP's ImageNet moment has arrived: https://thegradient.pub/nlp-imagenet/
- Google Cloud supercharges NLP with large language models: https://cloud.google.com/blog/products/ai-machine-learning/google-cloud-supercharges-nlp-with-large-language-models
- LaMDA: our breakthrough conversation technology: https://blog.google/technology/ai/lamda/
- Language Models are Few-Shot Learners: https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf
- PaLM-E: An embodied multimodal language model: https://ai.googleblog.com/2023/03/palm-e-embodied-multimodal-language.html
- Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance: https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html
- PaLM API & MakerSuite: an approachable way to start prototyping and building generative AI applications: https://developers.googleblog.com/2023/03/announcing-palm-api-and-makersuite.html
- The Power of Scale for Parameter-Efficient Prompt Tuning: https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf
- Google Research, 2022 & beyond: Language models: https://ai.googleblog.com/2023/01/google-research-2022-beyond-language.html#LanguageModels
[title]Testing Various LLMs

| Rank | Model | Elo Rating | Description |
|-|-|-|-|
| 1 | 🥇 [vicuna-13b](https://lmsys.org/blog/2023-03-30-vicuna/) | 1169 | a chat assistant fine-tuned from LLaMA on user-shared conversations by LMSYS |
| 2 | 🥈 [koala-13b](https://bair.berkeley.edu/blog/2023/04/03/koala) | 1082 | a dialogue model for academic research by BAIR |
| 3 | 🥉 [oasst-pythia-12b](https://open-assistant.io/) | 1065 | an Open Assistant for everyone by LAION |
| 4 | [alpaca-13b](https://crfm.stanford.edu/2023/03/13/alpaca.html) | 1008 | a model fine-tuned from LLaMA on instruction-following demonstrations by Stanford |
| 5 | [chatglm-6b](https://chatglm.cn/blog) | 985 | an open bilingual dialogue language model by Tsinghua University |
| 6 | [fastchat-t5-3b](https://huggingface.co/lmsys/fastchat-t5-3b-v1.0) | 951 | a chat assistant fine-tuned from FLAN-T5 by LMSYS |
| 7 | [dolly-v2-12b](https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm) | 944 | an instruction-tuned open large language model by Databricks |
| 8 | [llama-13b](https://arxiv.org/abs/2302.13971) | 932 | open and efficient foundation language models by Meta |
| 9 | [stablelm-tuned-alpha-7b](https://github.com/stability-AI/stableLM) | 858 | Stability AI language models |
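The Elo ratings in the table come from pairwise "battles" in which users compare two models' answers and pick a winner. As a rough illustration only, the standard Elo update rule can be sketched as below; the K-factor of 32 and starting rating of 1000 are common chess defaults, not necessarily the exact parameters used to produce the leaderboard above:

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model (400-point logistic scale)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """Update both ratings after one battle.

    score_a is 1.0 if A wins, 0.0 if A loses, 0.5 for a tie.
    A's gain equals B's loss, so the average rating is preserved.
    """
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

# Two models start equal; the winner of one battle gains k/2 points.
a, b = elo_update(1000.0, 1000.0, score_a=1.0)
print(a, b)  # 1016.0 984.0
```

Repeating this update over many user votes makes the ratings converge so that, e.g., a ~100-point gap corresponds to roughly a 64% expected win rate for the higher-rated model.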