LogoAIAny

Tag

Explore by tags

  • All
  • 30u30
  • ASR
  • ChatGPT
  • GNN
  • IDE
  • RAG
  • ai-agent
  • ai-api
  • ai-api-management
  • ai-client
  • ai-coding
  • ai-demos
  • ai-development
  • ai-framework
  • ai-image
  • ai-image-demos
  • ai-inference
  • ai-leaderboard
  • ai-library
  • ai-rank
  • ai-serving
  • ai-tools
  • ai-train
  • ai-video
  • ai-workflow
  • AIGC
  • alibaba
  • amazon
  • anthropic
  • audio
  • blog
  • book
  • bytedance
  • chatbot
  • chemistry
  • claude
  • course
  • deepmind
  • deepseek
  • engineering
  • foundation
  • foundation-model
  • gemini
  • github
  • google
  • gradient-booting
  • grok
  • huggingface
  • LLM
  • llm
  • math
  • mcp
  • mcp-client
  • mcp-server
  • meta-ai
  • microsoft
  • mlops
  • NLP
  • nvidia
  • ollama
  • openai
  • paper
  • physics
  • plugin
  • pytorch
  • RL
  • science
  • sora
  • translation
  • tutorial
  • vibe-coding
  • video
  • vision
  • xAI
  • xai

PyTorch

2016
Meta AI, Adam Paszke +3

Open-source deep-learning framework for building, training and deploying neural networks on GPUs and CPUs.

ai-development, ai-framework, meta-ai, ai-train, pytorch
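The "building, training and deploying" workflow PyTorch enables can be sketched in a few lines; this is a minimal, hypothetical example (the layer sizes and toy data are invented for illustration, not taken from any real project):

```python
import torch
import torch.nn as nn

# Minimal sketch: fit a small MLP to a toy regression target.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1)).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x = torch.randn(256, 4, device=device)
y = x.sum(dim=1, keepdim=True)           # toy target: sum of the features

init_loss = loss_fn(model(x), y).item()  # loss before any training
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                      # autograd computes all gradients
    opt.step()
```

The same loop runs unchanged on CPU or GPU, which is the framework's core appeal.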

nanoGPT

2022
Andrej Karpathy

nanoGPT is the simplest, fastest repository for training and finetuning medium-sized GPTs. It is a rewrite of minGPT that prioritizes practicality over education. Still under active development, but currently the file train.py reproduces GPT-2 (124M) on OpenWebText when run on a single node with 8×A100 40 GB GPUs in about 4 days of training.

github, LLM, tutorial, ai-train, openai (+1 more)
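The GPT-2 (124M) reproduction described above is driven from the repository's own scripts; a sketch of the typical invocation per the nanoGPT README (check the repo for current paths and flags before running — data preparation alone downloads tens of gigabytes):

```shell
# tokenize OpenWebText into binary train/val shards
python data/openwebtext/prepare.py

# launch training across the 8 GPUs of a single node
torchrun --standalone --nproc_per_node=8 train.py config/train_gpt2.py
```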

ONNX

2017
ONNX Project Contributors, Meta (Facebook) +1

ONNX (Open Neural Network Exchange) is an open ecosystem that provides an open source format for AI models, including deep learning and traditional ML. It defines an extensible computation graph model, built-in operators, and standard data types, focusing on inferencing capabilities. Widely supported across frameworks and hardware, it enables interoperability and accelerates AI innovation.

ai-framework, mlops, ai-inference, ai-serving, pytorch (+2 more)

X-AnyLabeling

2023
Wei Wang, CVHub

X-AnyLabeling is a powerful annotation tool integrated with an AI engine for fast and automatic labeling. Designed for multi-modal data engineers, it offers industrial-grade solutions for complex tasks. Supports images and videos, GPU acceleration, custom models, one-click inference for all task images, and import/export formats like COCO, VOC, YOLO. Handles classification, detection, segmentation, captioning, rotation, tracking, estimation, OCR, VQA, grounding, etc., with various annotation styles including polygons, rectangles, rotated boxes.

github, ai-tools, vision, ai-image, ai-video (+4 more)

DiffSynth-Studio

2023
ModelScope Community, Artiprocher

DiffSynth-Studio is an open-source Diffusion model engine developed and maintained by the ModelScope Community, focusing on image and video generation. It supports mainstream models like FLUX, Wan, and Qwen-Image, offering efficient memory management and flexible training frameworks. Key features include VRAM optimization, low-memory inference, LoRA/ControlNet training, and innovative techniques like EliGen and Nexus-Gen for pushing generative model boundaries.

github, AIGC, ai-tools, ai-image, ai-video (+5 more)

SGLang

2024
LMSYS Org

SGLang is a high-performance serving framework for large language models (LLMs) and vision-language models, designed for low-latency and high-throughput inference across single GPUs to large distributed clusters. Key features include RadixAttention for prefix caching, zero-overhead batch scheduling, prefill-decode disaggregation, speculative decoding, continuous batching, paged attention, tensor/pipeline/expert/data parallelism, structured outputs, chunked prefill, and quantization (FP4/FP8/INT4/AWQ/GPTQ). It supports a wide range of models like Llama, Qwen, DeepSeek, and hardware from NVIDIA, AMD, Intel, TPUs, with an intuitive frontend for LLM applications.

llm, ai-serving, ai-inference, nvidia, pytorch (+3 more)
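Because SGLang exposes an OpenAI-compatible HTTP API once a server is running, a client call can be sketched with nothing but the standard library; the model name, port, and prompt below are assumptions — match them to your own `sglang.launch_server` invocation:

```python
import json
import urllib.request

# Sketch: query a locally running SGLang server through its
# OpenAI-compatible /v1/chat/completions endpoint.
payload = {
    "model": "meta-llama/Llama-3.1-8B-Instruct",  # assumed model
    "messages": [{"role": "user", "content": "Say hello."}],
    "max_tokens": 32,
}
req = urllib.request.Request(
    "http://127.0.0.1:30000/v1/chat/completions",  # assumed port
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# With a server running, the request would be sent like this:
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])
```

Any existing OpenAI SDK client pointed at the server's base URL works the same way.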