RedPlanetHQ/core

CORE: Your Digital Brain - Memory Agent + Actions for AI Tools

Add to Cursor Deploy on Railway

Website Docs Discord


Your AI forgets. Every new chat starts with "let me give you some context." Your critical decisions, preferences, and insights are scattered across tools that don't talk to each other. Your head doesn't scale.

CORE is your memory agent. Not a database. Not a search box. A digital brain that replicates how human memory actually works—organizing episodes into topics, creating associations, and surfacing exactly what you need, when you need it.

For Developers

CORE is a memory agent that gives your AI tools persistent memory and the ability to act in the apps you use.

How it helps Claude Code:

  • Preferences → Surfaces during code review (formatting, patterns, tools)
  • Decisions → Surfaces when encountering similar choices ("why we chose X over Y")
  • Directives → Always available (rules like "always run tests", "never skip reviews")
  • Problems → Surfaces when debugging (issues you've hit before)
  • Goals → Surfaces when planning (what you're working toward)
  • Knowledge → Surfaces when explaining (your expertise level)

Right information, right time—not context dumping.

  • Context preserved across Claude Code, Cursor and other coding agents
  • Take actions in Linear, GitHub, Slack, Gmail, Google Sheets and other apps you use
  • Connect once via MCP, works everywhere
  • Open-source and self-hostable; your data, your control

What You Can Do

1. Never repeat yourself: context flows automatically

CORE becomes your persistent memory layer for coding agents. Ask any AI tool to pull relevant context—CORE's memory agent understands your intent and surfaces exactly what you need.

Search core memory for architecture decisions on the payment service

What CORE does: Classifies as Entity Query (payment service) + Aspect Query (decisions), filters by aspect=Decision and entity=payment service, returns decisions with their reasoning and timestamps.

What are my content guidelines in CORE for creating the blog?

What CORE does: Aspect Query for Preferences/Directives related to content, surfaces your rules and patterns for content creation.
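
If you want to issue these queries programmatically rather than from a chat tool, a search call might look like the sketch below, using the MCP TypeScript SDK against the hosted endpoint from the installation sections. The exact argument shape of memory_search is an assumption here; check the tool schema the server exposes.

// Minimal sketch: calling CORE's memory_search tool over MCP (TypeScript).
// The `arguments` shape is an assumption; inspect the server's tool schema.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "core-example", version: "1.0.0" });

// Authenticate with an API key, as in the header-based configs below.
await client.connect(
  new StreamableHTTPClientTransport(
    new URL("https://mcp.getcore.me/api/v1/mcp?source=example"),
    { requestInit: { headers: { Authorization: `Bearer ${process.env.CORE_API_KEY}` } } },
  ),
);

// "Search core memory for architecture decisions on the payment service"
const result = await client.callTool({
  name: "memory_search",
  arguments: { query: "architecture decisions on the payment service" },
});
console.log(result);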



2. Take actions in your apps from Claude/Cursor

Connect your apps once, take actions from anywhere.

  • Create/Read GitHub, Linear issues
  • Draft/Send/Read an email and store relevant info in CORE
  • Manage your calendar, update spreadsheets



3. Pick up where you left off in Claude Code/Cursor

Switching back to a feature after a week? Get caught up instantly.

What did we discuss about the checkout flow? Summarize from memory.
Refer to past discussions and remind me where we left off on the API refactor



What Makes CORE Different

  1. Temporal Context Graph: CORE doesn't just store facts — it remembers the story. When things happened, how your thinking evolved, what led to each decision. Your preferences, goals, and past choices — all connected in a graph that understands sequence and context.

  2. Memory Agent, Not RAG: Traditional RAG asks "what text chunks look similar?" CORE asks "what does the user want to know, and where in the organized knowledge does that live?"

    • 11 Fact Aspects: Every fact is classified (Preference, Decision, Directive, Problem, Goal, Knowledge, Identity, etc.) so CORE surfaces your coding style preferences during code review, or past architectural decisions when you're designing a new feature.

    • 5 Query Types: CORE classifies your intent (Aspect Query, Entity Lookup, Temporal, Exploratory, Relationship) and routes to the exact search strategy. Looking for "my preferences"? It filters by aspect. "Tell me about Sarah"? Entity graph traversal. "What happened last week"? Temporal filter.

    • Intent-Driven Retrieval: Classification first, search second. 3-4x faster than the old "search everything and rerank" approach (300-450ms vs 1200-2400ms).

  3. 88.24% Recall Accuracy: Tested on the LoCoMo benchmark. When you ask CORE something, it finds what's relevant. Not keyword matching, but true semantic understanding with multi-hop reasoning.

  4. You Control It: Your memory, your rules. Edit what's wrong. Delete what doesn't belong. Visualize how your knowledge connects. CORE is transparent: you see exactly what it knows.

  5. Open Source: No black boxes. No vendor lock-in. Your digital brain belongs to you.


Memory Agent vs RAG: Why It Matters

Traditional RAG treats memory as a search problem:

  • Embeds all your text
  • Searches for similarity
  • Returns chunks
  • No understanding of what kind of information you need

CORE Memory Agent treats memory as a knowledge problem:

  • Classifies every fact by type (Preference, Decision, Directive, etc.)
  • Understands your query intent (looking for preferences? past decisions? recent events?)
  • Routes to the exact search strategy (aspect filter, entity graph, temporal range)
  • Surfaces exactly what you need, not everything that might be relevant

Example:

You ask: "What are my coding preferences?"

  • RAG: Searches all your text for "coding" and "preferences", returns 50 chunks, hopes relevant ones are in there
  • CORE: Classifies as Aspect Query (Preference), filters statements by aspect=Preference, returns 5 precise facts: "Prefers TypeScript", "Uses pnpm", "Avoids class components", etc.

The Paradigm Shift: CORE doesn't improve RAG. It replaces it with structured knowledge retrieval.
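
To make the contrast concrete, here is a toy sketch of aspect-filtered retrieval. The Fact shape and data mirror the example above and are illustrative, not CORE's actual schema.

// Toy illustration of an Aspect Query: filter classified facts, don't rank chunks.
// Types and data are illustrative, not CORE's schema.
type Aspect = "Preference" | "Decision" | "Directive" | "Problem" | "Goal" | "Knowledge" | "Identity";

interface Fact {
  text: string;
  aspect: Aspect;
}

const facts: Fact[] = [
  { text: "Prefers TypeScript", aspect: "Preference" },
  { text: "Uses pnpm", aspect: "Preference" },
  { text: "Avoids class components", aspect: "Preference" },
  { text: "Chose Neo4j for graph storage", aspect: "Decision" },
];

// "What are my coding preferences?" → classified as Aspect Query (Preference)
const preferences = facts.filter((f) => f.aspect === "Preference");
console.log(preferences.map((f) => f.text));
// ["Prefers TypeScript", "Uses pnpm", "Avoids class components"]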


🚀 Quick Start

Choose your path:

                 CORE Cloud                Self-Host
Setup time       5 minutes                 15 minutes
Best for         Try quickly, no infra     Full control, your servers
Requirements     Just an account           Docker, 4GB RAM

Cloud

  1. Sign up at app.getcore.me
  2. Connect a source (Claude, Cursor, or any MCP-compatible tool)
  3. Start using CORE to take actions or store information about you in memory

Self-Host

Quick Deploy

Deploy on Railway

Or with Docker

  1. Clone the repository:
git clone https://github.com/RedPlanetHQ/core.git
cd core
  2. Configure environment variables in core/.env:
OPENAI_API_KEY=your_openai_api_key
  3. Start the service:
docker-compose up -d

Once deployed, you can configure your AI providers (OpenAI, Anthropic) and start building your memory graph.

👉 View complete self-hosting guide

Note: We tried open-source models (e.g. GPT OSS via Ollama), but fact generation was not good enough. We are still figuring out how to improve that, and will add OSS model support once we do.

🛠️ Installation

Recommended

Install in Claude Code CLI

Method 1: Plugin (Recommended) - ~2 minutes

  1. Install the CORE CLI globally:
npm install -g @redplanethq/corebrain
  2. Add the plugin marketplace and install the plugin:
# In Claude Code CLI, run:
/plugin marketplace add redplanethq/core
/plugin install core_brain
  3. Restart Claude Code and authenticate:
# After restart, login with:
/mcp
# Select core_brain and authenticate via browser

What this does: The plugin automatically loads your personalized "persona" document (summary of your preferences, rules, decisions) at every session start, and enables memory search across all your conversations. No manual configuration needed.

Method 2: Manual MCP Setup (Advanced)

If you prefer manual setup or need customization:

claude mcp add --transport http --scope user core-memory https://mcp.getcore.me/api/v1/mcp?source=Claude-Code

Then type /mcp and open core-memory MCP for authentication.

Install in Cursor

Since Cursor 1.0, you can click the install button below for instant one-click installation.

Install MCP Server

OR

  1. Go to: Settings -> Tools & Integrations -> Add Custom MCP
  2. Enter the following in your mcp.json file:
{
  "mcpServers": {
    "core-memory": {
      "url": "https://mcp.getcore.me/api/v1/mcp?source=cursor",
      "headers": {}
    }
  }
}
Install in Claude Desktop
  1. Copy the CORE MCP URL:
https://mcp.getcore.me/api/v1/mcp?source=Claude
  2. Navigate to Settings → Connectors → Click Add custom connector
  3. Click on "Connect" and grant Claude permission to access CORE MCP

CLIs

Install in Codex CLI

Option 1 (Recommended): Add to your ~/.codex/config.toml file:

[features]
rmcp_client=true

[mcp_servers.memory]
url = "https://mcp.getcore.me/api/v1/mcp?source=codex"

Then run: codex mcp memory login

Option 2 (If Option 1 doesn't work): Add API key configuration:

[features]
rmcp_client=true

[mcp_servers.memory]
url = "https://mcp.getcore.me/api/v1/mcp?source=codex"
http_headers = { "Authorization" = "Bearer CORE_API_KEY" }

Get your API key from app.getcore.me → Settings → API Key, then run: codex mcp memory login

Install in Gemini CLI

See Gemini CLI Configuration for details.

  1. Open the Gemini CLI settings file. The location is ~/.gemini/settings.json (where ~ is your home directory).
  2. Add the following to the mcpServers object in your settings.json file:
{
  "mcpServers": {
    "corememory": {
      "httpUrl": "https://mcp.getcore.me/api/v1/mcp?source=geminicli",
      "timeout": 5000
    }
  }
}

If the mcpServers object does not exist, create it.

Install in Copilot CLI

Add the following to your ~/.copilot/mcp-config.json file:

{
  "mcpServers": {
    "core": {
      "type": "http",
      "url": "https://mcp.getcore.me/api/v1/mcp?source=Copilot-CLI",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}

IDEs

Install in VS Code

Enter the following in your mcp.json file:

{
  "servers": {
    "core-memory": {
      "url": "https://mcp.getcore.me/api/v1/mcp?source=Vscode",
      "type": "http",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
Install in VS Code Insiders

Add to your VS Code Insiders MCP config:

{
  "mcp": {
    "servers": {
      "core-memory": {
        "type": "http",
        "url": "https://mcp.getcore.me/api/v1/mcp?source=VSCode-Insiders",
        "headers": {
          "Authorization": "Bearer YOUR_API_KEY"
        }
      }
    }
  }
}
Install in Windsurf

Enter the following in your mcp_config.json file:

{
  "mcpServers": {
    "core-memory": {
      "serverUrl": "https://mcp.getcore.me/api/v1/mcp/source=windsurf",
      "headers": {
        "Authorization": "Bearer <YOUR_API_KEY>"
      }
    }
  }
}
Install in Zed
  1. Go to Settings in the Agent Panel -> Add Custom Server
  2. Enter the code below in the configuration file and click the Add server button
{
  "core-memory": {
    "command": "npx",
    "args": ["-y", "mcp-remote", "https://mcp.getcore.me/api/v1/mcp?source=Zed"]
  }
}

Coding Agents

Install in Amp

Run this command in your terminal:

amp mcp add core-memory https://mcp.getcore.me/api/v1/mcp?source=amp
Install in Augment Code

Add to your ~/.augment/settings.json file:

{
  "mcpServers": {
    "core-memory": {
      "type": "http",
      "url": "https://mcp.getcore.me/api/v1/mcp?source=augment-code",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
Install in Cline
  1. Open Cline and click the hamburger menu icon (☰) to enter the MCP Servers section
  2. Choose Remote Servers tab and click the Edit Configuration button
  3. Add the following to your Cline MCP configuration:
{
  "mcpServers": {
    "core-memory": {
      "url": "https://mcp.getcore.me/api/v1/mcp?source=Cline",
      "type": "streamableHttp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
Install in Kilo Code
  1. Go to Settings → MCP Servers → Installed tab → click Edit Global MCP to edit your configuration.
  2. Add the following to your MCP config file:
{
  "core-memory": {
    "type": "streamable-http",
    "url": "https://mcp.getcore.me/api/v1/mcp?source=Kilo-Code",
    "headers": {
      "Authorization": "Bearer your-token"
    }
  }
}
Install in Kiro

Add in Kiro → MCP Servers:

{
  "mcpServers": {
    "core-memory": {
      "url": "https://mcp.getcore.me/api/v1/mcp?source=Kiro",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
Install in Qwen Coder

See Qwen Coder MCP Configuration for details.

Add to ~/.qwen/settings.json:

{
  "mcpServers": {
    "core-memory": {
      "httpUrl": "https://mcp.getcore.me/api/v1/mcp?source=Qwen",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY",
        "Accept": "application/json, text/event-stream"
      }
    }
  }
}
Install in Roo Code

Add to your Roo Code MCP configuration:

{
  "mcpServers": {
    "core-memory": {
      "type": "streamable-http",
      "url": "https://mcp.getcore.me/api/v1/mcp?source=Roo-Code",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
Install in Opencode

Add to your Opencode configuration:

{
  "mcp": {
    "core-memory": {
      "type": "remote",
      "url": "https://mcp.getcore.me/api/v1/mcp?source=Opencode",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      },
      "enabled": true
    }
  }
}
Install in Copilot Coding Agent

Add to Repository Settings → Copilot → Coding agent → MCP configuration:

{
  "mcpServers": {
    "core": {
      "type": "http",
      "url": "https://mcp.getcore.me/api/v1/mcp?source=Copilot-Agent",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
Install in Qodo Gen
  1. Open Qodo Gen chat panel in VSCode or IntelliJ
  2. Click Connect more tools, then click + Add new MCP
  3. Add the following configuration:
{
  "mcpServers": {
    "core-memory": {
      "url": "https://mcp.getcore.me/api/v1/mcp?source=Qodo-Gen"
    }
  }
}

Terminals

Install in Warp

Add in Settings → AI → Manage MCP servers:

{
  "core": {
    "url": "https://mcp.getcore.me/api/v1/mcp?source=Warp",
    "headers": {
      "Authorization": "Bearer YOUR_API_KEY"
    }
  }
}
Install in Crush

Add to your Crush configuration:

{
  "$schema": "https://charm.land/crush.json",
  "mcp": {
    "core": {
      "type": "http",
      "url": "https://mcp.getcore.me/api/v1/mcp?source=Crush",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}

Desktop Apps

Install in ChatGPT

Connect ChatGPT to CORE's memory system via browser extension:

  1. Install Core Browser Extension
  2. Generate API Key: Go to Settings → API Key → Generate new key → Name it "extension"
  3. Add API Key in Core Extension and click Save
Install in Gemini

Connect Gemini to CORE's memory system via browser extension:

  1. Install Core Browser Extension
  2. Generate API Key: Go to Settings → API Key → Generate new key → Name it "extension"
  3. Add API Key in Core Extension and click Save
Install in Perplexity Desktop
  1. Add in Perplexity → Settings → Connectors → Add Connector → Advanced:
{
  "core-memory": {
    "command": "npx",
    "args": ["-y", "mcp-remote", "https://mcp.getcore.me/api/v1/mcp?source=perplexity"]
  }
}
  2. Click Save to apply the changes
  3. CORE will be available in your Perplexity sessions

Development Tools

Install in Factory

Run in terminal:

droid mcp add core https://mcp.getcore.me/api/v1/mcp?source=Factory --type http --header "Authorization: Bearer YOUR_API_KEY"

Type /mcp within droid to manage servers and view available tools.

Install in Rovo Dev CLI
  1. Edit the MCP config:
acli rovodev mcp
  2. Add to your Rovo Dev MCP configuration:
{
  "mcpServers": {
    "core-memory": {
      "url": "https://mcp.getcore.me/api/v1/mcp?source=Rovo-Dev"
    }
  }
}
Install in Trae

Add to your Trae MCP configuration:

{
  "mcpServers": {
    "core": {
      "url": "https://mcp.getcore.me/api/v1/mcp?source=Trae"
    }
  }
}

🔨 Available Tools

CORE Memory MCP provides the following tools that LLMs can use:

  • memory_search: Search for relevant context in CORE Memory.
  • memory_ingest: Add an episode to CORE Memory.
  • memory_about_user: Fetch the user persona from CORE Memory.
  • initialise_conversation_session: Initialise a conversation and assign it a session id.
  • get_integrations: Determine which of the connected integrations is relevant for a task.
  • get_integrations_actions: Determine which of that integration's tools to use for the task.
  • execute_integrations_actions: Execute the chosen tool for that integration (see the sketch below).
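
A typical action flows through the last three tools in order: discover the integration, pick its action, execute it. The sketch below shows that sequence; the argument shapes are assumptions, so treat the real tool schemas as the source of truth.

// Hypothetical agent loop over CORE's MCP tools (TypeScript).
// Argument shapes are assumptions; the tool names are the ones listed above.
interface McpClient {
  callTool(req: { name: string; arguments: Record<string, unknown> }): Promise<unknown>;
}

async function fileIssueFromMemory(client: McpClient) {
  // 1. Pull relevant context from memory.
  const context = await client.callTool({
    name: "memory_search",
    arguments: { query: "open problems in the checkout flow" },
  });

  // 2. Ask which connected integration fits the task.
  const integration = await client.callTool({
    name: "get_integrations",
    arguments: { task: "create a bug ticket" },
  });

  // 3. Ask which of that integration's tools to use.
  const action = await client.callTool({
    name: "get_integrations_actions",
    arguments: { integration: "linear", task: "create a bug ticket" },
  });

  // 4. Execute the chosen action, feeding in the recalled context.
  return client.callTool({
    name: "execute_integrations_actions",
    arguments: { integration: "linear", action: "create_issue", input: { context } },
  });
}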

How It Works: Your Digital Brain

CORE replicates how human memory works. Your brain doesn't store memories as flat text—it organizes episodes into topics, creates associations, and knows where things belong. CORE does the same.

Memory Ingestion: Building Your Knowledge Graph

When you save context to CORE, it goes through four phases:

  1. Normalization: Links new info to recent context, breaks documents into coherent chunks while keeping cross-references

  2. Extraction: Identifies entities (people, tools, projects), creates statements with context and time, maps relationships

  3. Classification: Every fact is categorized into 1 of 11 aspects:

    • Identity: "Manik works at Red Planet" (who you are)
    • Preference: "Prefers concise code reviews" (how you want things)
    • Decision: "Chose Neo4j for graph storage" (choices made)
    • Directive: "Always run tests before PR" (rules to follow)
    • Knowledge: "Expert in TypeScript" (what you know)
    • Problem: "Blocked by API rate limits" (challenges faced)
    • Goal: "Launch MVP by Q2" (what you're working toward)
    • ...and 4 more (Belief, Action, Event, Relationship)
  4. Graph Integration: Connects entities, statements, and episodes into a temporal knowledge graph

Example: "We wrote CORE in Next.js" becomes:

  • Entities: CORE, Next.js
  • Statement: CORE was developed using Next.js (aspect: Knowledge)
  • Relationship: was developed using
  • When: Timestamped and linked to the source episode
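
As a rough picture of what that looks like in the graph (field names here are illustrative, not CORE's storage schema):

// Illustrative shape of the stored graph elements; not CORE's actual schema.
const entities = [{ name: "CORE" }, { name: "Next.js" }];

const statement = {
  subject: "CORE",
  predicate: "was developed using",
  object: "Next.js",
  aspect: "Knowledge",             // one of the 11 fact aspects
  validAt: "2025-01-15T00:00:00Z", // hypothetical timestamp
  episodeId: "ep_123",             // hypothetical link to the source episode
};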

Memory Recall: Intent-Driven Retrieval

When you query CORE, the memory agent classifies your intent into 1 of 5 query types:

  1. Aspect Query - "What are my preferences?" → Filters by fact aspect (Preference)
  2. Entity Lookup - "Tell me about Sarah" → Traverses entity graph
  3. Temporal Query - "What happened last week?" → Filters by time range
  4. Exploratory - "Catch me up" → Returns recent session summaries
  5. Relationship Query - "How do I know Sarah?" → Multi-hop graph traversal

Then CORE:

  • Routes to specific handler: No wasted searches—goes straight to the right part of your knowledge graph
  • Re-ranks: Surfaces most relevant and diverse results
  • Filters: Applies time, reliability, and relationship strength filters
  • Returns context: Facts AND the episodes they came from
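
Schematically, recall is a dispatch on the classified query type rather than one global search; the handler names below are illustrative stand-ins for CORE's internal strategies.

// Schematic dispatch on query type (TypeScript). Handler names are illustrative.
type QueryType = "aspect" | "entity" | "temporal" | "exploratory" | "relationship";

declare function classifyIntent(q: string): Promise<QueryType>;
declare function searchByAspect(q: string): Promise<unknown>;      // e.g. aspect=Preference
declare function traverseEntityGraph(q: string): Promise<unknown>; // "Tell me about Sarah"
declare function filterByTimeRange(q: string): Promise<unknown>;   // "What happened last week?"
declare function recentSessionSummaries(): Promise<unknown>;       // "Catch me up"
declare function multiHopTraversal(q: string): Promise<unknown>;   // "How do I know Sarah?"

async function recall(query: string) {
  switch (await classifyIntent(query)) {
    case "aspect":       return searchByAspect(query);
    case "entity":       return traverseEntityGraph(query);
    case "temporal":     return filterByTimeRange(query);
    case "exploratory":  return recentSessionSummaries();
    case "relationship": return multiHopTraversal(query);
  }
}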

Traditional RAG: Searches everything, reranks everything (1200-2400ms)
CORE Memory Agent: Classifies intent, searches precisely (300-450ms, 3-4x faster)

CORE doesn't just recall facts—it recalls them in context, with time and story, so AI agents respond the way you would remember.


🛠️ For Agent Builders

Building AI agents? CORE gives you memory infrastructure + integrations infrastructure so you can focus on your agent's logic.

What You Get

Memory Infrastructure

  • Temporal knowledge graph with 88.24% LoCoMo accuracy
  • Hybrid search: semantic + keyword + graph traversal
  • Tracks context evolution and contradictions
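
As a sketch of what "hybrid" means in practice, one common pattern is a weighted blend of the three signals; the weights and score fields below are illustrative assumptions, not CORE's implementation.

// Illustrative hybrid ranking: blend semantic, keyword, and graph signals.
// Weights and fields are assumptions, not CORE's actual values.
interface Candidate {
  factId: string;
  semantic: number; // embedding similarity
  keyword: number;  // lexical score, e.g. BM25
  graph: number;    // proximity to query entities in the knowledge graph
}

const score = (c: Candidate) => 0.5 * c.semantic + 0.3 * c.keyword + 0.2 * c.graph;

function hybridRank(candidates: Candidate[]): Candidate[] {
  return [...candidates].sort((a, b) => score(b) - score(a));
}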

Integrations Infrastructure

  • Connect GitHub, Linear, Slack, Gmail once
  • Your agent gets MCP tools for all connected apps
  • No OAuth flows to build, no API maintenance

Example Projects

core-cli — A task manager agent that connects to CORE for memory and syncs with Linear and GitHub Issues.

holo — Turn your CORE memory into a personal website with chat.



🔥 Research Highlights

CORE memory achieves 88.24% average accuracy on the LoCoMo dataset across all reasoning tasks, significantly outperforming other memory providers.

Task Type            Description
Single-hop           Answers based on a single session
Multi-hop            Synthesizing info from multiple sessions
Open-domain          Integrating user info with external knowledge
Temporal reasoning   Time-related cues and sequence understanding

View benchmark methodology and results →


🔒 Security

CASA Tier 2 Certified — Third-party audited to meet Google's OAuth requirements.

  • Encryption: TLS 1.3 (transit) + AES-256 (rest)
  • Authentication: OAuth 2.0 and magic link
  • Access Control: Workspace-based isolation, role-based permissions
  • Zero-trust architecture: Never trust, always verify

Your data, your control:

  • Edit and delete anytime
  • Never used for AI model training
  • Self-hosting option for full isolation

For detailed security information, see our Security Policy.

Vulnerability Reporting: [email protected]

Documentation

Explore our documentation to get the most out of CORE

🧑‍💻 Support

Have questions or feedback? We're here to help: join our Discord or explore the Docs.

Usage Guidelines

Store:

  • Conversation history
  • User preferences
  • Task context
  • Reference materials

Don't Store:

  • Sensitive data (PII)
  • Credentials
  • System logs
  • Temporary data
