LogoAIAny

Tag

Explore by tags

  • All

  • 30u30

  • ASR

  • ChatGPT

  • GNN

  • IDE

  • RAG

  • ai-agent

  • ai-api

  • ai-api-management

  • ai-client

  • ai-coding

  • ai-development

  • ai-framework

  • ai-image

  • ai-inference

  • ai-leaderboard

  • ai-library

  • ai-rank

  • ai-serving

  • ai-tools

  • ai-train

  • ai-video

  • ai-workflow

  • AIGC

  • alibaba

  • amazon

  • anthropic

  • audio

  • blog

  • book

  • chatbot

  • chemistry

  • claude

  • course

  • deepmind

  • deepseek

  • engineering

  • foundation

  • foundation-model

  • gemini

  • google

  • gradient-boosting

  • grok

  • huggingface

  • LLM

  • math

  • mcp

  • mcp-client

  • mcp-server

  • meta-ai

  • microsoft

  • mlops

  • NLP

  • nvidia

  • openai

  • paper

  • physics

  • plugin

  • RL

  • science

  • translation

  • tutorial

  • vibe-coding

  • video

  • vision

  • xAI

  • xai


ChatGPT

2022
OpenAI

ChatGPT helps you get answers, find inspiration and be more productive. It is free to use and easy to try. Just ask and ChatGPT can help with writing, learning, brainstorming and more.

chatbot · ai-tools · foundation-model · openai · ai-image

OpenAI API

2020
OpenAI

A cloud-based API that lets developers integrate OpenAI's cutting-edge language, image, and audio models into their own products via simple RESTful endpoints.

ai-development · ai-api · ai-agent · ai-image · openai
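The "simple RESTful endpoints" mentioned above boil down to JSON POST requests. A minimal sketch of building a request body for the documented chat-completions endpoint (the model name here is illustrative, and an `Authorization: Bearer <OPENAI_API_KEY>` header would be required to actually send it):

```python
import json

def build_chat_request(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Construct the JSON body for a POST to /v1/chat/completions.

    Only the payload is built here; sending it requires an API key
    in an Authorization header.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

body = build_chat_request("Summarize scaling laws in one sentence.")
print(json.dumps(body, indent=2))
```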

GPT-2: Language Models are Unsupervised Multitask Learners

2019
Alec Radford, Jeffrey Wu +4

This paper introduces GPT-2, showing that large-scale language models trained on diverse internet text can perform a wide range of natural language tasks in a zero-shot setting — without any task-specific training. By scaling up to 1.5 billion parameters and training on WebText, GPT-2 achieves state-of-the-art or competitive results on benchmarks like language modeling, reading comprehension, and question answering. Its impact has been profound, pioneering the trend toward general-purpose, unsupervised language models and paving the way for today’s foundation models in AI.

LLM · NLP · openai · paper

Scaling Laws for Neural Language Models

2020
Jared Kaplan, Sam McCandlish +8

This paper reveals that language model performance improves predictably as model size, dataset size, and compute scale up, following smooth power-law relationships. It shows that larger models are more sample-efficient, and that optimally efficient training uses very large models on moderate data, stopping well before convergence. The work provided foundational insights that influenced the development of massive models like GPT-3 and beyond, shaping how the AI community understands the trade-offs between size, data, and compute in building ever-stronger models.

LLM · NLP · openai · 30u30 · paper
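The power-law relationship described above can be sketched numerically. Using the paper's parameter-count law, L(N) = (N_c / N)^α_N, with the approximate constants reported for it (α_N ≈ 0.076, N_c ≈ 8.8e13), each order of magnitude of scale shaves off a small, predictable fraction of the loss:

```python
def loss_from_params(n_params: float,
                     n_c: float = 8.8e13,
                     alpha_n: float = 0.076) -> float:
    """Predicted cross-entropy loss from parameter count alone,
    L(N) = (N_c / N)^alpha_N, using the paper's approximate constants."""
    return (n_c / n_params) ** alpha_n

# Loss falls smoothly and predictably as the model grows.
losses = {n: loss_from_params(n) for n in (1e8, 1e9, 1e10, 1e11)}
for n, loss in losses.items():
    print(f"N = {n:.0e}  ->  predicted L(N) = {loss:.3f}")
```

This is the sense in which scaling is "predictable": a fit on small models extrapolates to much larger ones, which is what made compute budgeting for models like GPT-3 tractable.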

GPT-3: Language Models are Few-Shot Learners

2020
Tom B. Brown, Benjamin Mann +29

This paper introduces GPT-3, a 175-billion-parameter autoregressive language model that achieves impressive zero-shot, one-shot, and few-shot performance across diverse NLP tasks without task-specific fine-tuning. Its scale allows it to generalize from natural language prompts, rivaling or surpassing prior state-of-the-art models that require fine-tuning. The paper’s impact is profound: it demonstrated the power of scaling laws, reshaped research on few-shot learning, and sparked widespread adoption of large-scale language models, influencing advancements in AI applications, ethical debates, and commercial deployments globally.

LLM · NLP · openai · paper
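"Few-shot" here means no gradient updates at all: the task is specified purely in the prompt, as a description plus K worked demonstrations, and the model completes the final unanswered query. A minimal sketch of that prompt format (the translation demonstrations below follow the style of the paper's examples):

```python
def few_shot_prompt(task: str,
                    examples: list[tuple[str, str]],
                    query: str) -> str:
    """Assemble an in-context few-shot prompt: a task description,
    K input => output demonstrations, then the query left unanswered
    for the model to complete."""
    parts = [task]
    for src, tgt in examples:
        parts.append(f"{src} => {tgt}")
    parts.append(f"{query} =>")
    return "\n".join(parts)

prompt = few_shot_prompt(
    "Translate English to French:",
    [("sea otter", "loutre de mer"), ("cheese", "fromage")],
    "peppermint",
)
print(prompt)
```

Zero-shot and one-shot are just the K = 0 and K = 1 cases of the same format, which is how the paper compares the three settings.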

GPT-4 Technical Report

2023
OpenAI, Josh Achiam +279

This paper introduces GPT-4, a large multimodal model that processes both text and images, achieving human-level performance on many academic and professional benchmarks like the bar exam and GRE. It significantly advances language understanding, multilingual capabilities, and safety alignment over previous models, outperforming GPT-3.5 by wide margins. Its impact is profound, setting new standards for natural language processing, enabling safer and more powerful applications, and driving critical research on scaling laws, safety, bias, and the societal implications of AI deployment.

LLMNLPopenaipaper