LogoAIAny

Tag

Explore by tags

  • All
  • 30u30
  • ASR
  • ChatGPT
  • GNN
  • IDE
  • RAG
  • ai-agent
  • ai-api
  • ai-api-management
  • ai-client
  • ai-coding
  • ai-demos
  • ai-development
  • ai-framework
  • ai-image
  • ai-image-demos
  • ai-inference
  • ai-leaderboard
  • ai-library
  • ai-rank
  • ai-serving
  • ai-tools
  • ai-train
  • ai-video
  • ai-workflow
  • AIGC
  • alibaba
  • amazon
  • anthropic
  • audio
  • blog
  • book
  • bytedance
  • chatbot
  • chemistry
  • claude
  • course
  • deepmind
  • deepseek
  • engineering
  • foundation
  • foundation-model
  • gemini
  • github
  • google
  • gradient-booting
  • grok
  • huggingface
  • LLM
  • llm
  • math
  • mcp
  • mcp-client
  • mcp-server
  • meta-ai
  • microsoft
  • mlops
  • NLP
  • nvidia
  • ocr
  • ollama
  • openai
  • paper
  • physics
  • plugin
  • pytorch
  • RL
  • science
  • sora
  • translation
  • tutorial
  • vibe-coding
  • video
  • vision
  • xAI
  • xai


LightRAG

2024
Zirui Guo, Lianghao Xia +3

LightRAG is an open-source framework designed for simple and fast Retrieval-Augmented Generation (RAG), integrating knowledge graphs, vector search, and efficient LLM-based processing to enhance question-answering over large document collections.

Tags: RAG, LLM, NLP, github, ai-development, +5 more
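
A minimal usage sketch, assuming LightRAG's README-style insert/query API with an OpenAI-backed model helper; module paths and helper names vary between releases, so treat them as assumptions:

```python
# Illustrative LightRAG sketch; the lightrag.llm import path and the
# gpt_4o_mini_complete helper are assumptions based on the project's README.
from lightrag import LightRAG, QueryParam
from lightrag.llm import gpt_4o_mini_complete  # assumed OpenAI-backed LLM helper

rag = LightRAG(
    working_dir="./rag_storage",          # where the knowledge graph + vector indexes live
    llm_model_func=gpt_4o_mini_complete,  # LLM used for extraction and answering
)

with open("./docs/book.txt") as f:
    rag.insert(f.read())                  # build the knowledge graph and embeddings

# "hybrid" mode combines knowledge-graph retrieval with vector search
print(rag.query("What are the main themes?", param=QueryParam(mode="hybrid")))
```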

nanoGPT

2022
Andrej Karpathy

nanoGPT is the simplest, fastest repository for training/finetuning medium-sized GPTs. It is a rewrite of minGPT that prioritizes practicality over education. It is still under active development, but the current train.py reproduces GPT-2 (124M) on OpenWebText on a single 8xA100 40GB node in about 4 days of training.

Tags: github, LLM, tutorial, ai-train, openai, +1 more
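
Besides the multi-day training run, the repo's model.py can load pretrained GPT-2 weights for quick sampling. A small sketch, assuming it is run from inside a nanoGPT checkout with torch, tiktoken, and transformers installed:

```python
# Sampling sketch based on nanoGPT's model.py / sample.py; assumes the repo's
# model.py is importable from the current directory.
import torch
import tiktoken
from model import GPT  # nanoGPT's GPT implementation

model = GPT.from_pretrained('gpt2')  # loads the 124M GPT-2 checkpoint via transformers
model.eval()

enc = tiktoken.get_encoding('gpt2')
idx = torch.tensor([enc.encode("Hello, my name is")], dtype=torch.long)
with torch.no_grad():
    out = model.generate(idx, max_new_tokens=20, temperature=0.8, top_k=200)
print(enc.decode(out[0].tolist()))
```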

KTransformers

2024
MADSys Lab, Tsinghua University, Approaching.AI +17

KTransformers is a flexible framework for experiencing cutting-edge optimizations in LLM inference and fine-tuning, focused on CPU-GPU heterogeneous computing. It consists of two core modules: kt-kernel for high-performance inference kernels and kt-sft for fine-tuning. The project supports a range of hardware and models such as the DeepSeek series and Kimi-K2, achieving significant resource savings and speedups, for example reducing the GPU memory needed for a 671B-parameter model to 70 GB and delivering up to 28x acceleration.

Tags: github, llm, ai-inference, ai-train, ai-framework, +3 more

BettaFish (微舆)

2025
Jiang Hang (666ghj), Beijing University of Posts and Telecommunications

BettaFish (Weiyu) is an open-source multi-agent sentiment analysis system built from scratch without relying on any frameworks. Through chat-like interactions, it lets users analyze public opinion across 30+ major Chinese and international social media platforms and millions of comments, breaking information silos, revealing true sentiment, predicting trends, and aiding decision-making. Key features include AI-driven global monitoring, multimodal analysis, agent-forum collaboration, and lightweight, extensible deployment.

Tags: ai-agent, ai-tools, ai-development, github, RAG

Spec Kit

2024
GitHub, Den Delimarsky +1

Spec Kit is an open-source toolkit designed to enable Spec-Driven Development, helping developers create high-quality software by focusing on detailed specifications and leveraging AI coding agents for efficient implementation.

Tags: ai-coding, ai-development, ai-agent, github

SGLang

2024
LMSYS Org

SGLang is a high-performance serving framework for large language models (LLMs) and vision-language models, designed for low-latency, high-throughput inference on anything from a single GPU to large distributed clusters. Key features include RadixAttention for prefix caching, zero-overhead batch scheduling, prefill-decode disaggregation, speculative decoding, continuous batching, paged attention, tensor/pipeline/expert/data parallelism, structured outputs, chunked prefill, and quantization (FP4/FP8/INT4/AWQ/GPTQ). It supports a wide range of models such as Llama, Qwen, and DeepSeek, and hardware from NVIDIA, AMD, and Intel as well as TPUs, with an intuitive frontend for building LLM applications.

Tags: llm, ai-serving, ai-inference, nvidia, pytorch, +3 more
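
A typical workflow is to launch the server (e.g. `python -m sglang.launch_server --model-path <model>`) and then talk to its OpenAI-compatible endpoint. The client-side sketch below assumes a server already running on localhost:30000; the model name is a placeholder:

```python
# Client-side sketch against SGLang's OpenAI-compatible API; assumes a server
# is already listening on port 30000 (the default in SGLang's docs).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="default",  # placeholder: SGLang serves whatever model the server was launched with
    messages=[{"role": "user", "content": "Explain prefix caching in one sentence."}],
    temperature=0.7,
)
print(resp.choices[0].message.content)
```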

Claude Quickstarts

2024
Anthropic

Claude Quickstarts is a collection of projects by Anthropic to help developers quickly build deployable applications using the Claude API. It includes quickstarts for a customer support agent, financial data analyst, computer use demo, and autonomous coding agent, demonstrating Claude's capabilities in natural language processing, data analysis, computer control, and automated coding.

Tags: anthropic, claude, ai-api, tutorial, ai-agent, +4 more
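
The quickstarts all build on the Claude Messages API; a minimal call looks roughly like the sketch below, with the model name as a placeholder and the API key read from the environment:

```python
# Minimal Claude Messages API sketch; requires `pip install anthropic` and an
# ANTHROPIC_API_KEY environment variable. The model id is a placeholder.
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-5",  # substitute a currently available Claude model
    max_tokens=512,
    system="You are a concise customer-support agent.",
    messages=[{"role": "user", "content": "How do I reset my password?"}],
)
print(message.content[0].text)
```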

Cognee

2025
topoteretes, Vasilije Markovic +3

Cognee is an open-source tool that provides persistent, dynamic AI memory for agents by combining vector search with graph databases, replacing traditional RAG systems with scalable ECL (Extract, Cognify, Load) pipelines.

Tags: ai-agent, RAG, LLM, github, ai-library, +1 more
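
A sketch of the add → cognify → search flow from cognee's README; exact signatures (especially for search) differ between releases, so treat the call shapes as assumptions:

```python
# Illustrative cognee sketch; the top-level add/cognify/search coroutines follow
# the project's README, but argument names may vary by version.
import asyncio
import cognee

async def main():
    await cognee.add("Cognee builds AI memory by combining a knowledge graph with vector search.")
    await cognee.cognify()  # extract entities and relations into the graph + embeddings
    results = await cognee.search("What does cognee combine?")
    for r in results:
        print(r)

asyncio.run(main())
```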

Agent Development Kit (ADK)

2025
Google

An open-source, code-first Python toolkit for building, evaluating, and deploying sophisticated AI agents with flexibility and control.

Tags: ai-agent, google, ai-framework, ai-development, ai-library, +2 more
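
A sketch of defining a single agent with a plain-Python tool, following the ADK quickstart; the import path and the Gemini model id are assumptions that may change between releases:

```python
# Illustrative ADK sketch; google.adk.agents.Agent and the model id follow the
# quickstart docs and should be checked against the installed version.
from google.adk.agents import Agent

def get_time(city: str) -> dict:
    """Toy tool: returns a canned time for the given city."""
    return {"city": city, "time": "09:00"}

root_agent = Agent(
    name="time_agent",
    model="gemini-2.0-flash",  # placeholder Gemini model id
    instruction="Answer questions about the current time using the get_time tool.",
    tools=[get_time],
)
# The agent is then run via ADK's runner or the `adk web` / `adk run` dev tools.
```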

MiniMind

2024
Jingyao Gong

MiniMind is an open-source GitHub project that lets users train a 26M-parameter tiny LLM from scratch in just 2 hours at a cost of about 3 RMB. It provides native PyTorch implementations of tokenizer training, pretraining, supervised fine-tuning (SFT), LoRA, DPO, and PPO/GRPO reinforcement learning, plus an MoE architecture with vision multimodal extensions. It includes high-quality open datasets, supports single-GPU training, and is compatible with Transformers, llama.cpp, and other frameworks, making it ideal for LLM beginners.

Tags: LLM, tutorial, github, ai-train, RL

Memori

2025
GibsonAI

Memori is an open-source SQL-native memory engine for LLMs, AI agents, and multi-agent systems, providing persistent, queryable memory on standard SQL databases that can be enabled with a single line of code.

Tags: github, ai-agent, LLM, ai-development, ai-library, +1 more
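
A hypothetical sketch of what that single-line enablement might look like; the class name, constructor parameter, and enable() call are assumptions drawn from the project's description and may not match the current API:

```python
# Hypothetical Memori sketch; names below are assumptions, not the verified API.
from memori import Memori

memory = Memori(database_connect="sqlite:///agent_memory.db")  # assumed SQL connection string
memory.enable()  # assumed one-liner: start recording LLM conversations into SQL

# ...then make LLM / agent calls as usual; past context is read back from the database.
```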

MineContext

2025
Volcengine

MineContext is an open-source proactive context-aware AI partner designed to bring clarity and efficiency to your work, study, and creation. It captures and understands your digital world context via screenshots and content comprehension (with future support for multi-modal sources like documents, images, videos, code), and proactively delivers high-quality information such as insights, daily/weekly summaries, to-do lists, and activity records using a context engineering framework.

Tags: github, bytedance, ai-tools, ai-client, ai-agent, +4 more