Strix is an open-source project that provides autonomous AI agents for penetration testing: they simulate real attackers by dynamically running code, identifying vulnerabilities, and validating findings with working proof-of-concept exploits to support secure application development.
Warp is an AI-native terminal for building software with multiple AI agents, covering the workflow from writing code to shipping it, and acting as both a coding and a terminal agent.
High-level, multi-backend deep-learning API for building, training and deploying neural-network models.
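This description matches Keras's multi-backend API; assuming that is the project in question, here is a minimal sketch of defining, compiling, and training a small classifier (the layer sizes and synthetic data are illustrative, not part of the original entry):

```python
import numpy as np
import keras
from keras import layers

# Build a small classifier with the high-level Sequential API.
model = keras.Sequential([
    keras.Input(shape=(20,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),
])

# Compile with an optimizer, loss, and metric, then train on synthetic data.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

x = np.random.rand(256, 20).astype("float32")
y = np.random.randint(0, 3, size=(256,))
model.fit(x, y, epochs=3, batch_size=32)
```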
Open-source, high-performance framework and DSL for serving large language and vision-language models with low-latency, controllable, structured generation.
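This description matches SGLang's frontend DSL and runtime; purely as a hedged sketch under that assumption, the snippet below defines a small generation program and runs it against a locally launched server (the localhost:30000 address, model launch command, and stop condition are illustrative):

```python
import sglang as sgl

# A small DSL program: the framework controls generation structure
# (prompt segments, named gen() calls, stop strings) on the server side.
@sgl.function
def qa(s, question):
    s += "Question: " + question + "\n"
    s += "Answer: " + sgl.gen("answer", max_tokens=64, stop="\n")

# Point the DSL at a running server, assumed to be launched separately,
# e.g. with `python -m sglang.launch_server --model-path <model>`.
sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))

state = qa.run(question="What is low-latency structured generation?")
print(state["answer"])
```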
Gemini API lets developers call Google’s multimodal Gemini family of large language models—covering text, vision, audio and video—to generate, analyze and transform content in their own applications.
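A minimal sketch of calling the Gemini API from Python with the google-generativeai client; the model name and the GEMINI_API_KEY environment variable are illustrative choices, and newer Google SDKs may expose a different client class:

```python
import os
import google.generativeai as genai

# Authenticate with an API key (assumed here to live in GEMINI_API_KEY).
genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# Pick a multimodal-capable model; the name is illustrative and changes
# as new Gemini versions are released.
model = genai.GenerativeModel("gemini-1.5-flash")

# Text generation; the same generate_content call also accepts image,
# audio, and video parts for multimodal prompts.
response = model.generate_content("Summarize what a vision-language model is.")
print(response.text)
```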
Open-source framework that accelerates fine-tuning and full training of transformer LLMs by up to 30× while cutting VRAM requirements by roughly 90%, letting developers train custom models quickly on commodity GPUs.
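The claimed speedups and VRAM savings match Unsloth's pitch; treating that as an assumption, here is a hedged sketch of its typical workflow: load a 4-bit base model with FastLanguageModel and attach LoRA adapters (the model name and hyperparameters are illustrative):

```python
from unsloth import FastLanguageModel

# Load a quantized base model (name and settings are illustrative).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights are trained,
# which is where most of the VRAM savings come from.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
# The resulting model/tokenizer pair then plugs into a standard
# Hugging Face TRL SFTTrainer loop for fine-tuning.
```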
Lightweight PyTorch wrapper that separates research code from engineering, enabling fast, scalable and reproducible deep-learning workflows.
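This reads like PyTorch Lightning's research/engineering split; assuming so, a minimal sketch: the LightningModule holds the model, loss, and optimizer, while the Trainer owns the loop, devices, and checkpointing (sizes and data are synthetic):

```python
import torch
from torch import nn
import lightning as L


class LitClassifier(L.LightningModule):
    """Research code: model definition, loss, and optimizer only."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.cross_entropy(self.net(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


# Engineering code (device placement, loops, checkpointing) lives in Trainer.
dataset = torch.utils.data.TensorDataset(
    torch.randn(256, 20), torch.randint(0, 3, (256,))
)
loader = torch.utils.data.DataLoader(dataset, batch_size=32)
trainer = L.Trainer(max_epochs=3)
trainer.fit(LitClassifier(), loader)
```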
OpenRouter is a unified API gateway and marketplace that lets developers access, compare, and route requests across 400+ large language models from 60+ providers through a single, fully OpenAI-compatible endpoint.
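Because the endpoint is OpenAI-compatible, the standard openai Python client works against OpenRouter by overriding base_url; the model slug and key placeholder below are illustrative:

```python
from openai import OpenAI

# OpenRouter exposes an OpenAI-compatible endpoint, so the standard
# OpenAI client works by pointing base_url at openrouter.ai.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-...",  # your OpenRouter API key
)

# The model string selects a provider/model pair on the marketplace;
# the one below is illustrative.
completion = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Compare two LLMs in one sentence."}],
)
print(completion.choices[0].message.content)
```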
A lightweight open-source platform for running, managing, and integrating large language models locally via a simple CLI and REST API.
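This description matches Ollama; assuming so, a minimal sketch of calling the local REST API (default port 11434) after pulling a model with the CLI, e.g. `ollama pull llama3` (the model name is illustrative):

```python
import requests

# Ollama serves a local REST API; /api/generate returns a completion.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Explain what running an LLM locally means.",
        "stream": False,  # return a single JSON object instead of a stream
    },
)
print(resp.json()["response"])
```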
JAX is a high-performance Python library that brings just-in-time compilation, automatic differentiation and easy parallelism to NumPy-style array programming.
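A minimal sketch of the features named above: NumPy-style code via jax.numpy, automatic differentiation with jax.grad, and JIT compilation with jax.jit (the toy least-squares problem is illustrative):

```python
import jax
import jax.numpy as jnp

# NumPy-style code: a simple quadratic loss.
def loss(w, x, y):
    return jnp.mean((x @ w - y) ** 2)

# grad differentiates w.r.t. the first argument; jit compiles the
# resulting function with XLA for fast repeated execution.
grad_loss = jax.jit(jax.grad(loss))

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (128, 8))
y = jnp.ones((128,))
w = jnp.zeros((8,))

for _ in range(100):
    w = w - 0.1 * grad_loss(w, x, y)  # one gradient-descent step
print(loss(w, x, y))
```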
NVIDIA TensorRT is an SDK and tool suite that compiles and optimizes trained neural-network models for ultra-fast, low-latency inference on NVIDIA GPUs.
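A hedged sketch of building an optimized engine from an ONNX model with the TensorRT Python API; exact class and flag names vary across TensorRT versions, so treat this as an outline rather than a drop-in script:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)

# Parse a trained ONNX model into a TensorRT network definition.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError("ONNX parse failed")

# Build an optimized, serialized engine; FP16 is one of the optimizations
# TensorRT applies for low-latency GPU inference.
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)
engine_bytes = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```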
Open-source deep-learning optimization library from Microsoft that scales PyTorch training and inference to trillions of parameters with maximum efficiency.
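This description matches Microsoft's DeepSpeed; under that assumption, a hedged sketch of wrapping a plain PyTorch model with deepspeed.initialize and a ZeRO config (the config values are illustrative, and real jobs are normally started with the deepspeed launcher across multiple GPUs):

```python
import torch
from torch import nn
import deepspeed

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))

# ZeRO partitions optimizer state (and optionally gradients/parameters)
# across GPUs; the stage and batch size here are illustrative.
ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# deepspeed.initialize returns an engine that handles distributed
# optimization, mixed precision, and gradient accumulation.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

x = torch.randn(32, 1024).to(model_engine.device).half()
loss = model_engine(x).float().pow(2).mean()
model_engine.backward(loss)
model_engine.step()
```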