Best learning resources for AI
FastGPT is an open-source AI knowledge-base platform that combines retrieval-augmented generation (RAG), visual workflows, and multi-model support for quickly building domain-specific chatbots.
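A minimal sketch of querying a FastGPT app from Python, assuming a self-hosted instance that exposes the OpenAI-compatible chat endpoint described in FastGPT's docs; the base URL, app key, and model name below are placeholders, so verify them against your deployment.

```python
# Minimal sketch: query a FastGPT app through its OpenAI-compatible API.
# Assumptions: a self-hosted FastGPT instance and an app-specific API key;
# the URL, key, and model name below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://fastgpt.example.com/api/v1",  # assumed OpenAI-compatible prefix
    api_key="fastgpt-app-key",                      # per-app key from the FastGPT console
)

resp = client.chat.completions.create(
    model="my-knowledge-base-app",  # FastGPT routes by app, so this name is nominal
    messages=[{"role": "user", "content": "What does our refund policy say?"}],
)
print(resp.choices[0].message.content)
```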
The book offers a clear, intuitive introduction to deep learning, breaking complex mathematical ideas into accessible explanations with vivid illustrations. It covers essential topics such as neural networks, backpropagation, optimization, and modern architectures, making it well suited to newcomers and to practitioners seeking conceptual clarity. By demystifying deep learning's core principles, it serves as a bridge between foundational theory and practical implementation for a broad audience.
Zero-code CLI & WebUI to fine-tune 100+ LLMs/VLMs with LoRA, QLoRA, PPO, DPO and more.
A Microsoft Research approach that enriches RAG with knowledge-graph structure and community summaries.
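To make "community summaries" concrete, here is a schematic Python sketch of the global-query pattern the approach describes: score each community summary against the question, then synthesize an answer from the most relevant ones. The `llm()` helper is hypothetical; this shows the shape of the idea, not the library's API.

```python
# Schematic map-reduce over community summaries (the "global search" idea).
# `llm` is a hypothetical helper standing in for any chat-completion call.
from typing import Callable, List

def global_query(question: str, community_summaries: List[str],
                 llm: Callable[[str], str], top_k: int = 3) -> str:
    # Map step: ask the model how relevant each community summary is (0-100).
    scored = []
    for summary in community_summaries:
        score_text = llm(
            f"On a scale of 0-100, how relevant is this summary to the question?\n"
            f"Question: {question}\nSummary: {summary}\nAnswer with a number only."
        )
        try:
            score = float(score_text.strip())
        except ValueError:
            score = 0.0
        scored.append((score, summary))

    # Reduce step: synthesize an answer from the highest-scoring summaries.
    best = [s for _, s in sorted(scored, key=lambda t: t[0], reverse=True)[:top_k]]
    context = "\n\n".join(best)
    return llm(f"Using only this context, answer the question.\n"
               f"Context:\n{context}\n\nQuestion: {question}")
```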
This paper introduces GPT-4, a large multimodal model that processes both text and images, achieving human-level performance on many academic and professional benchmarks like the bar exam and GRE. It significantly advances language understanding, multilingual capabilities, and safety alignment over previous models, outperforming GPT-3.5 by wide margins. Its impact is profound, setting new standards for natural language processing, enabling safer and more powerful applications, and driving critical research on scaling laws, safety, bias, and the societal implications of AI deployment.
Genkit is an open-source framework that helps JavaScript/TypeScript developers add server-side AI features backed by Vertex AI and other model providers.
An open-source, Ray-based framework for scalable Reinforcement Learning from Human Feedback (RLHF).
An open-source, Next.js-based chat platform with a plugin marketplace.
This paper presents DeepSeek-V2, a 236B-parameter open-source Mixture-of-Experts (MoE) language model that activates only 21B parameters per token, achieving top-tier bilingual (English and Chinese) performance with remarkable training cost savings (42.5%) and inference efficiency (5.76× throughput) compared to previous models. Its innovations—Multi-head Latent Attention (MLA) and DeepSeekMoE—reduce memory bottlenecks and boost specialization. The paper’s impact lies in advancing economical, efficient large-scale language modeling, pushing open-source models closer to closed-source leaders, and paving the way for future multimodal and AGI-aligned systems.
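A generic NumPy sketch of the sparse-activation idea behind "activates only 21B parameters per token": a router selects the top-k experts for each token, so most expert parameters are untouched on any given forward pass. This illustrates top-k MoE routing in general, not DeepSeekMoE or MLA specifically.

```python
# Generic top-k Mixture-of-Experts routing: each token only uses k experts,
# so most parameters stay inactive per token (the idea behind sparse MoE).
import numpy as np

d_model, n_experts, k = 64, 8, 2
rng = np.random.default_rng(0)

router_w = rng.standard_normal((d_model, n_experts)) * 0.02
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    logits = x @ router_w                          # (n_experts,) routing scores
    top = np.argsort(logits)[-k:]                  # indices of the k best experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                       # softmax over the selected experts only
    # Only k of the n_experts weight matrices are evaluated for this token.
    return sum(w * (experts[i] @ x) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
print(moe_layer(token).shape)  # (64,), computed with 2 of 8 experts
```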
An integration & tooling platform that equips AI agents and LLM apps with 300-plus pre-built, authenticated tools and event triggers.
Run SQL over files, apps, and 40+ services, and expose the engine to LLMs through MCP or a MySQL-compatible wire protocol.
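Because the engine speaks a MySQL-compatible wire protocol, any standard MySQL client can query it; a minimal PyMySQL sketch, with the host, port, and credentials below as placeholders for your deployment.

```python
# Minimal sketch: talk to the engine over its MySQL-compatible wire protocol
# with a standard MySQL client. Host, port, user, and password are placeholders.
import pymysql

conn = pymysql.connect(
    host="127.0.0.1",
    port=47335,            # assumed port; check your deployment's docs
    user="demo",
    password="demo",
)
try:
    with conn.cursor() as cur:
        cur.execute("SHOW DATABASES;")   # federated sources appear as databases
        for (name,) in cur.fetchall():
            print(name)
finally:
    conn.close()
```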
Lightning-fast engine that lets you serve any AI model—LLMs, vision, audio—at scale with zero YAML and automatic GPU autoscaling.