Minimalist web-searching platform with an AI assistant that runs directly from your browser. Uses WebLLM, Wllama and SearXNG. Demo: https://felladrin-minisearch.hf.space
◉ Universal Intelligence: AI made simple.
Rocus is an intelligent, AI-powered knowledge graph that automatically organizes your saved websites using local machine learning models. Built with Vue.js, D3.js, and WebGPU-accelerated AI, Rocus runs entirely in your browser: no servers, no cloud, complete privacy.
WhyAI – Hybrid generative AI (offline and online) in the browser, private and account-free.
A privacy-first AI writing assistant running entirely in your browser using WebGPU and Llama-3. No server, no data leakage.
PII Redactor for Gemini is a privacy-first browser extension that runs a local, in-browser LLM between you and Gemini. It automatically identifies and redacts personally identifiable information, sensitive keywords, and custom data patterns before your prompt ever leaves your device.
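A minimal sketch of the redact-then-forward idea, using @mlc-ai/web-llm as the local runtime; the model id, system prompt, and redact helper are illustrative assumptions, not the extension's actual implementation:

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Example prebuilt model id; any MLC chat model works here.
const engine = await CreateMLCEngine("Llama-3.2-3B-Instruct-q4f16_1-MLC");

// Hypothetical helper: rewrite the prompt locally so PII never leaves the device.
async function redact(prompt: string): Promise<string> {
  const reply = await engine.chat.completions.create({
    messages: [
      {
        role: "system",
        content:
          "Replace all names, emails, phone numbers, and addresses in the user's text with [REDACTED]. Return only the rewritten text.",
      },
      { role: "user", content: prompt },
    ],
    temperature: 0,
  });
  return reply.choices[0].message.content ?? prompt;
}

// Only the redacted prompt would be forwarded to the remote service.
console.log(await redact("Email john.doe@example.com about the invoice."));
```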
Use an LLM locally, privately, and for free. Built using HTML, CSS, and JavaScript.
End-to-end structured output for browser LLMs. Constrain Transformers.js/web-llm to valid JSON/SQL via GBNF grammars.
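WebLLM does not consume GBNF grammars directly, but its built-in JSON mode illustrates the same grammar-constrained decoding; a sketch assuming WebLLM's response_format with a JSON-schema string (model id is an example):

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f16_1-MLC");

const reply = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Return a user as JSON with name and age." }],
  // Constrained decoding: output must parse as JSON matching this schema.
  response_format: {
    type: "json_object",
    schema: JSON.stringify({
      type: "object",
      properties: { name: { type: "string" }, age: { type: "integer" } },
      required: ["name", "age"],
    }),
  },
});

console.log(JSON.parse(reply.choices[0].message.content ?? "{}"));
```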
🔒 Empower your writing with PrivAI, a local AI tool that ensures privacy while transforming text using advanced language models.
Use online Gemini, OpenAI, or OpenAI-compatible APIs (v1/chat). Use offline with @mlc-ai/web-llm, Transformers.js (Whisper ONNX), and esearch-ocr. Supports PWA to enable unrestricted offline conversation translation.
This application allows you to run large language models (LLMs) directly in your browser using Web-LLM.
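A minimal example of WebLLM's in-browser API, assuming one of MLC's prebuilt model ids (the first run downloads and caches the weights, then inference runs on WebGPU):

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Report download/compile progress while the model loads.
const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f16_1-MLC", {
  initProgressCallback: (report) => console.log(report.text),
});

// OpenAI-style chat completion, served entirely from the browser.
const reply = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Explain WebGPU in one sentence." }],
});
console.log(reply.choices[0].message.content);
```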
Use an LLM in the browser without API calls, running inference in a Web Worker (multithreading in JavaScript).
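A sketch of WebLLM's Web Worker setup, which keeps model execution off the UI thread; file names and the model id are illustrative:

```ts
// worker.ts: hosts the engine inside the worker
import { WebWorkerMLCEngineHandler } from "@mlc-ai/web-llm";

const handler = new WebWorkerMLCEngineHandler();
self.onmessage = (msg: MessageEvent) => handler.onmessage(msg);
```

```ts
// main.ts: same chat API, but inference never blocks the page
import { CreateWebWorkerMLCEngine } from "@mlc-ai/web-llm";

const engine = await CreateWebWorkerMLCEngine(
  new Worker(new URL("./worker.ts", import.meta.url), { type: "module" }),
  "Llama-3.1-8B-Instruct-q4f16_1-MLC", // example prebuilt model id
);

const reply = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Hello from the main thread!" }],
});
console.log(reply.choices[0].message.content);
```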