Perplexica

Perplexica is an open-source, privacy-focused AI answering engine that runs on your own hardware. It combines private web search (via a bundled SearxNG instance) with local LLMs (through Ollama) or cloud providers to produce cited answers, file-based Q&A, image and video search, and configurable search modes. It is designed for self-hosting and developer integration (Docker, API, docs).

Introduction

Perplexica is an open-source, privacy-first AI answering/search engine designed to run entirely on your own hardware or private infrastructure. It aims to be an alternative to closed commercial answering services by combining web search results with language model reasoning while keeping user data local and private.

Key goals and positioning
  • Privacy-first: all searches, history and uploaded files can be stored locally; search uses a self-hosted SearxNG instance by default.
  • Open-source alternative: positioned as a community-driven alternative to proprietary answering engines (e.g., Perplexity AI).
  • Hybrid model support: integrates both local LLMs (via Ollama) and cloud model providers so users can pick trade-offs between latency, cost, and capability.
Main features
  • Support for multiple model providers: local models (Ollama) and cloud providers (OpenAI, Anthropic Claude, Groq, etc.) are supported and can be mixed.
  • Smart search modes: Speed, Balanced, and Quality modes let users trade response speed against answer depth.
  • Cited answers: answers include cited sources drawn from web search results.
  • Built-in private web search: includes SearxNG bundled in the Docker image to query multiple search engines while maintaining privacy.
  • File uploads and Q&A: upload PDFs, text files and images and ask questions about their contents.
  • Image and video search: returns visual content alongside textual results.
  • Widgets: UI cards for quick lookups (weather, calculations, stocks, etc.).
  • Domain-limited search: scope searches to particular sites or domains (useful for documentation or research).
  • Search history: local history for revisiting and reusing previous searches.
  • Developer API: an HTTP API to run searches and integrate Perplexica into other apps (a sketch follows this list).
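
The snippet below sketches how an application could call that API over HTTP. It is a minimal, illustrative example: the /api/search path, the query and focusMode request fields, and the message/sources response shape are assumptions taken from the project's API documentation and should be checked against the version you deploy.

// search.ts - illustrative sketch; the endpoint path and field names are
// assumptions based on the project's API docs and may differ per release.
async function searchPerplexica(query: string): Promise<void> {
  const res = await fetch("http://localhost:3000/api/search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query,                  // the question to answer
      focusMode: "webSearch", // assumed mode identifier; see the API docs
    }),
  });
  if (!res.ok) throw new Error(`Perplexica returned HTTP ${res.status}`);
  const data = await res.json();
  console.log(data.message); // assumed: generated answer text
  console.log(data.sources); // assumed: list of cited web sources
}

searchPerplexica("What is SearxNG?").catch(console.error);
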
Architecture & extensibility
  • Perplexica is built on Next.js for the frontend and backend APIs. The repository contains architecture docs describing how search results, retrievers and model backends connect.
  • Web search is performed via SearxNG (bundled) but Perplexica supports pointing to an external SearxNG instance for customization.
  • Model adapters allow connecting to OpenAI-compatible APIs, Ollama, and other providers; users can configure model names, keys, and endpoints.
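
To make the adapter idea concrete, here is a hypothetical TypeScript sketch; the ChatProvider interface and OpenAICompatibleProvider class are illustrative names, not the repository's actual types, but they show how any OpenAI-compatible endpoint (OpenAI, Groq, or a local server) can sit behind the same configuration surface of model name, API key, and endpoint URL.

// Illustrative adapter sketch - NOT the repository's actual code; it only
// mirrors the configuration surface described above (model, key, endpoint).
interface ChatProvider {
  name: string;
  generate(prompt: string): Promise<string>;
}

// Any backend exposing the OpenAI-style /chat/completions route fits here:
// OpenAI itself, Groq, or a local OpenAI-API-compatible server.
class OpenAICompatibleProvider implements ChatProvider {
  constructor(
    public name: string,
    private baseUrl: string, // e.g. https://api.openai.com/v1
    private apiKey: string,
    private model: string,   // configured model name
  ) {}

  async generate(prompt: string): Promise<string> {
    const res = await fetch(`${this.baseUrl}/chat/completions`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${this.apiKey}`,
      },
      body: JSON.stringify({
        model: this.model,
        messages: [{ role: "user", content: prompt }],
      }),
    });
    const data = await res.json();
    return data.choices[0].message.content; // standard chat-completions shape
  }
}

An Ollama-backed adapter would implement the same interface against Ollama's own HTTP API; in Perplexica this wiring is driven by configuration (model names, keys, endpoints) rather than code you write yourself.
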
Installation & quick start

Run the provided Docker image (bundled with SearxNG):

docker run -d -p 3000:3000 -v perplexica-data:/home/perplexica/data --name perplexica itzcrazykns1337/perplexica:latest

Then open http://localhost:3000 and complete setup (API keys, model selection, etc.).

Slim (use your own SearxNG)

docker run -d -p 3000:3000 -e SEARXNG_API_URL=http://your-searxng-url:8080 -v perplexica-data:/home/perplexica/data --name perplexica itzcrazykns1337/perplexica:slim-latest

From source (non-Docker)

git clone https://github.com/ItzCrazyKns/Perplexica.git
cd Perplexica
npm i
npm run build
npm run start
# then open http://localhost:3000
Troubleshooting / Deployment notes
  • Ollama connection tips (common pitfalls): make sure the API URL is correct and that Ollama is reachable from the Perplexica container (e.g., via host.docker.internal or by binding Ollama to 0.0.0.0), and open firewall ports if needed; a quick connectivity check follows this list.
  • If using a local OpenAI-API-compatible server, ensure it is reachable from the Perplexica backend and configured with the correct model name.
  • Docker is recommended for simplest setup and included SearxNG integration; advanced users can deploy on private servers, cloud, or via provided one-click deployment templates.
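
As a quick check for the first point above, the sketch below probes Ollama's model-listing endpoint (GET /api/tags, part of Ollama's HTTP API) from wherever the Perplexica backend runs; the URL shown assumes a host-installed Ollama reached from inside a Docker container and should be adjusted to your setup.

// check-ollama.ts - connectivity probe; the URL is an assumption for a
// host-installed Ollama reached from a container (default port 11434).
const OLLAMA_URL = "http://host.docker.internal:11434";

async function checkOllama(): Promise<void> {
  try {
    const res = await fetch(`${OLLAMA_URL}/api/tags`); // lists installed models
    const data = await res.json();
    const names = (data.models ?? []).map((m: { name: string }) => m.name);
    console.log("Ollama reachable, models:", names);
  } catch (err) {
    console.error(
      "Cannot reach Ollama - check the API URL, host binding (0.0.0.0), and firewall:",
      err,
    );
  }
}

checkOllama();
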
Sponsors & ecosystem
  • The project lists sponsors and partners (Warp, Exa, etc.) and shows available integrations for search APIs and provider dashboards.
  • Community support channels (GitHub issues and Discord) are encouraged for bugs, feature requests, and discussion.
Who is this for?
  • Developers and researchers who prefer self-hosted / privacy-preserving search and question-answering tools.
  • Teams wanting a customizable answer engine that can integrate internal documentation, uploaded files, and private models.
Limitations & roadmap
  • The project is actively developed; some integrations and features (additional sources, widgets, authentication, custom agents) are marked as upcoming.
  • Production-grade deployment may require additional configuration for scaling, authentication, and secure exposure to the internet.

(Repository created on 2024-04-09.)

Information

  • Website: github.com
  • Authors: ItzCrazyKns
  • Published date: 2024/04/09